00:00:00.001 Started by upstream project "autotest-per-patch" build number 122817 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.041 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/dsa-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.042 The recommended git tool is: git 00:00:00.042 using credential 00000000-0000-0000-0000-000000000002 00:00:00.048 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/dsa-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.064 Fetching changes from the remote Git repository 00:00:00.066 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.088 Using shallow fetch with depth 1 00:00:00.088 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.088 > git --version # timeout=10 00:00:00.119 > git --version # 'git version 2.39.2' 00:00:00.119 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.119 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.119 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.821 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.833 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.844 Checking out Revision 10da8f6d99838e411e4e94523ded0bfebf3e7100 (FETCH_HEAD) 00:00:03.844 > git config core.sparsecheckout # timeout=10 00:00:03.856 > git read-tree -mu HEAD # timeout=10 00:00:03.871 > git checkout -f 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=5 00:00:03.892 Commit message: "scripts/create_git_mirror: Update path to xnvme submodule" 00:00:03.892 > git rev-list --no-walk 10da8f6d99838e411e4e94523ded0bfebf3e7100 # timeout=10 00:00:03.971 [Pipeline] Start of Pipeline 00:00:03.982 [Pipeline] library 00:00:03.984 Loading library shm_lib@master 00:00:03.984 Library shm_lib@master is cached. Copying from home. 00:00:03.999 [Pipeline] node 00:00:04.003 Running on FCP07 in /var/jenkins/workspace/dsa-phy-autotest 00:00:04.006 [Pipeline] { 00:00:04.013 [Pipeline] catchError 00:00:04.014 [Pipeline] { 00:00:04.024 [Pipeline] wrap 00:00:04.031 [Pipeline] { 00:00:04.037 [Pipeline] stage 00:00:04.039 [Pipeline] { (Prologue) 00:00:04.210 [Pipeline] sh 00:00:04.493 + logger -p user.info -t JENKINS-CI 00:00:04.506 [Pipeline] echo 00:00:04.508 Node: FCP07 00:00:04.514 [Pipeline] sh 00:00:04.812 [Pipeline] setCustomBuildProperty 00:00:04.821 [Pipeline] echo 00:00:04.822 Cleanup processes 00:00:04.825 [Pipeline] sh 00:00:05.107 + sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:00:05.107 1668874 sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:00:05.120 [Pipeline] sh 00:00:05.405 ++ sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:00:05.405 ++ grep -v 'sudo pgrep' 00:00:05.405 ++ awk '{print $1}' 00:00:05.405 + sudo kill -9 00:00:05.406 + true 00:00:05.419 [Pipeline] cleanWs 00:00:05.428 [WS-CLEANUP] Deleting project workspace... 00:00:05.428 [WS-CLEANUP] Deferred wipeout is used... 
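The prologue above prunes any SPDK processes left over from a previous run before wiping the workspace. A minimal sketch of that cleanup idiom, assuming the same workspace path this job uses; the pgrep/grep/awk pipeline mirrors the commands echoed in the log, while the variable name and the explicit '|| true' are illustrative:

    #!/usr/bin/env bash
    # Kill stale SPDK processes from an earlier run (sketch of the prologue step).
    ws=/var/jenkins/workspace/dsa-phy-autotest/spdk
    # pgrep -af prints "<pid> <full command line>"; drop the pgrep line itself
    # and keep only the PID column.
    pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
    # kill exits non-zero when the list is empty, which is why the log also
    # shows a bare "+ true" after "+ sudo kill -9".
    sudo kill -9 $pids || true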
00:00:05.434 [WS-CLEANUP] done 00:00:05.438 [Pipeline] setCustomBuildProperty 00:00:05.450 [Pipeline] sh 00:00:05.733 + sudo git config --global --replace-all safe.directory '*' 00:00:05.799 [Pipeline] nodesByLabel 00:00:05.801 Found a total of 1 nodes with the 'sorcerer' label 00:00:05.809 [Pipeline] httpRequest 00:00:05.813 HttpMethod: GET 00:00:05.814 URL: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:05.819 Sending request to url: http://10.211.164.101/packages/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:05.837 Response Code: HTTP/1.1 200 OK 00:00:05.837 Success: Status code 200 is in the accepted range: 200,404 00:00:05.837 Saving response body to /var/jenkins/workspace/dsa-phy-autotest/jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:08.421 [Pipeline] sh 00:00:08.705 + tar --no-same-owner -xf jbp_10da8f6d99838e411e4e94523ded0bfebf3e7100.tar.gz 00:00:08.723 [Pipeline] httpRequest 00:00:08.728 HttpMethod: GET 00:00:08.729 URL: http://10.211.164.101/packages/spdk_68960dff26103c36bc69a94395cbcf426be30468.tar.gz 00:00:08.730 Sending request to url: http://10.211.164.101/packages/spdk_68960dff26103c36bc69a94395cbcf426be30468.tar.gz 00:00:08.747 Response Code: HTTP/1.1 200 OK 00:00:08.748 Success: Status code 200 is in the accepted range: 200,404 00:00:08.748 Saving response body to /var/jenkins/workspace/dsa-phy-autotest/spdk_68960dff26103c36bc69a94395cbcf426be30468.tar.gz 00:01:36.925 [Pipeline] sh 00:01:37.213 + tar --no-same-owner -xf spdk_68960dff26103c36bc69a94395cbcf426be30468.tar.gz 00:01:39.769 [Pipeline] sh 00:01:40.051 + git -C spdk log --oneline -n5 00:01:40.051 68960dff2 lib/event: Bug fix for framework_set_scheduler 00:01:40.051 4506c0c36 test/common: Enable inherit_errexit 00:01:40.051 b24df7cfa test: Drop superfluous calls to print_backtrace() 00:01:40.051 7b52e4c17 test/scheduler: Meassure utime of $spdk_pid threads as a fallback 00:01:40.051 1dc065205 test/scheduler: Calculate median of the cpu load samples 00:01:40.063 [Pipeline] } 00:01:40.078 [Pipeline] // stage 00:01:40.086 [Pipeline] stage 00:01:40.088 [Pipeline] { (Prepare) 00:01:40.108 [Pipeline] writeFile 00:01:40.127 [Pipeline] sh 00:01:40.416 + logger -p user.info -t JENKINS-CI 00:01:40.430 [Pipeline] sh 00:01:40.716 + logger -p user.info -t JENKINS-CI 00:01:40.729 [Pipeline] sh 00:01:41.016 + cat autorun-spdk.conf 00:01:41.016 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.016 SPDK_TEST_ACCEL_DSA=1 00:01:41.016 SPDK_TEST_ACCEL_IAA=1 00:01:41.016 SPDK_TEST_NVMF=1 00:01:41.016 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:41.016 SPDK_RUN_ASAN=1 00:01:41.016 SPDK_RUN_UBSAN=1 00:01:41.024 RUN_NIGHTLY=0 00:01:41.029 [Pipeline] readFile 00:01:41.055 [Pipeline] withEnv 00:01:41.058 [Pipeline] { 00:01:41.073 [Pipeline] sh 00:01:41.360 + set -ex 00:01:41.360 + [[ -f /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf ]] 00:01:41.360 + source /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf 00:01:41.360 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:41.360 ++ SPDK_TEST_ACCEL_DSA=1 00:01:41.360 ++ SPDK_TEST_ACCEL_IAA=1 00:01:41.360 ++ SPDK_TEST_NVMF=1 00:01:41.360 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:41.360 ++ SPDK_RUN_ASAN=1 00:01:41.360 ++ SPDK_RUN_UBSAN=1 00:01:41.360 ++ RUN_NIGHTLY=0 00:01:41.360 + case $SPDK_TEST_NVMF_NICS in 00:01:41.360 + DRIVERS= 00:01:41.360 + [[ -n '' ]] 00:01:41.360 + exit 0 00:01:41.372 [Pipeline] } 00:01:41.394 [Pipeline] // withEnv 00:01:41.400 [Pipeline] } 00:01:41.417 [Pipeline] // stage 00:01:41.427 [Pipeline] catchError 00:01:41.429 
[Pipeline] { 00:01:41.444 [Pipeline] timeout 00:01:41.445 Timeout set to expire in 50 min 00:01:41.447 [Pipeline] { 00:01:41.463 [Pipeline] stage 00:01:41.465 [Pipeline] { (Tests) 00:01:41.481 [Pipeline] sh 00:01:41.768 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/dsa-phy-autotest 00:01:41.768 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest 00:01:41.768 + DIR_ROOT=/var/jenkins/workspace/dsa-phy-autotest 00:01:41.768 + [[ -n /var/jenkins/workspace/dsa-phy-autotest ]] 00:01:41.768 + DIR_SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:01:41.768 + DIR_OUTPUT=/var/jenkins/workspace/dsa-phy-autotest/output 00:01:41.768 + [[ -d /var/jenkins/workspace/dsa-phy-autotest/spdk ]] 00:01:41.768 + [[ ! -d /var/jenkins/workspace/dsa-phy-autotest/output ]] 00:01:41.768 + mkdir -p /var/jenkins/workspace/dsa-phy-autotest/output 00:01:41.768 + [[ -d /var/jenkins/workspace/dsa-phy-autotest/output ]] 00:01:41.768 + cd /var/jenkins/workspace/dsa-phy-autotest 00:01:41.768 + source /etc/os-release 00:01:41.768 ++ NAME='Fedora Linux' 00:01:41.768 ++ VERSION='38 (Cloud Edition)' 00:01:41.768 ++ ID=fedora 00:01:41.768 ++ VERSION_ID=38 00:01:41.768 ++ VERSION_CODENAME= 00:01:41.768 ++ PLATFORM_ID=platform:f38 00:01:41.768 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:41.768 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:41.768 ++ LOGO=fedora-logo-icon 00:01:41.768 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:41.768 ++ HOME_URL=https://fedoraproject.org/ 00:01:41.768 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:41.768 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:41.768 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:41.768 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:41.768 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:41.768 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:41.768 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:41.768 ++ SUPPORT_END=2024-05-14 00:01:41.768 ++ VARIANT='Cloud Edition' 00:01:41.768 ++ VARIANT_ID=cloud 00:01:41.768 + uname -a 00:01:41.768 Linux spdk-fcp-07 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:41.768 + sudo /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status 00:01:44.314 Hugepages 00:01:44.314 node hugesize free / total 00:01:44.314 node0 1048576kB 0 / 0 00:01:44.314 node0 2048kB 0 / 0 00:01:44.314 node1 1048576kB 0 / 0 00:01:44.314 node1 2048kB 0 / 0 00:01:44.314 00:01:44.314 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:44.314 DSA 0000:6a:01.0 8086 0b25 0 idxd - - 00:01:44.314 IAA 0000:6a:02.0 8086 0cfe 0 idxd - - 00:01:44.314 DSA 0000:6f:01.0 8086 0b25 0 idxd - - 00:01:44.314 IAA 0000:6f:02.0 8086 0cfe 0 idxd - - 00:01:44.314 DSA 0000:74:01.0 8086 0b25 0 idxd - - 00:01:44.314 IAA 0000:74:02.0 8086 0cfe 0 idxd - - 00:01:44.314 DSA 0000:79:01.0 8086 0b25 0 idxd - - 00:01:44.314 IAA 0000:79:02.0 8086 0cfe 0 idxd - - 00:01:44.314 NVMe 0000:c9:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:44.575 NVMe 0000:ca:00.0 8086 0a54 1 nvme nvme1 nvme1n1 00:01:44.575 DSA 0000:e7:01.0 8086 0b25 1 idxd - - 00:01:44.575 IAA 0000:e7:02.0 8086 0cfe 1 idxd - - 00:01:44.575 DSA 0000:ec:01.0 8086 0b25 1 idxd - - 00:01:44.575 IAA 0000:ec:02.0 8086 0cfe 1 idxd - - 00:01:44.575 DSA 0000:f1:01.0 8086 0b25 1 idxd - - 00:01:44.575 IAA 0000:f1:02.0 8086 0cfe 1 idxd - - 00:01:44.575 DSA 0000:f6:01.0 8086 0b25 1 idxd - - 00:01:44.575 IAA 0000:f6:02.0 8086 0cfe 1 idxd - - 00:01:44.575 + rm -f /tmp/spdk-ld-path 00:01:44.575 + source 
autorun-spdk.conf 00:01:44.575 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:44.575 ++ SPDK_TEST_ACCEL_DSA=1 00:01:44.575 ++ SPDK_TEST_ACCEL_IAA=1 00:01:44.575 ++ SPDK_TEST_NVMF=1 00:01:44.575 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:44.575 ++ SPDK_RUN_ASAN=1 00:01:44.575 ++ SPDK_RUN_UBSAN=1 00:01:44.575 ++ RUN_NIGHTLY=0 00:01:44.575 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:44.575 + [[ -n '' ]] 00:01:44.575 + sudo git config --global --add safe.directory /var/jenkins/workspace/dsa-phy-autotest/spdk 00:01:44.575 + for M in /var/spdk/build-*-manifest.txt 00:01:44.575 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:44.575 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/dsa-phy-autotest/output/ 00:01:44.575 + for M in /var/spdk/build-*-manifest.txt 00:01:44.575 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:44.575 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/dsa-phy-autotest/output/ 00:01:44.575 ++ uname 00:01:44.575 + [[ Linux == \L\i\n\u\x ]] 00:01:44.575 + sudo dmesg -T 00:01:44.575 + sudo dmesg --clear 00:01:44.575 + dmesg_pid=1669909 00:01:44.575 + [[ Fedora Linux == FreeBSD ]] 00:01:44.575 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:44.575 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:44.575 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:44.575 + [[ -x /usr/src/fio-static/fio ]] 00:01:44.575 + export FIO_BIN=/usr/src/fio-static/fio 00:01:44.575 + FIO_BIN=/usr/src/fio-static/fio 00:01:44.575 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\d\s\a\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:44.575 + sudo dmesg -Tw 00:01:44.575 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:44.575 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:44.575 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:44.575 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:44.575 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:44.575 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:44.575 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:44.575 + spdk/autorun.sh /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf 00:01:44.575 Test configuration: 00:01:44.575 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:44.575 SPDK_TEST_ACCEL_DSA=1 00:01:44.575 SPDK_TEST_ACCEL_IAA=1 00:01:44.575 SPDK_TEST_NVMF=1 00:01:44.575 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:44.575 SPDK_RUN_ASAN=1 00:01:44.575 SPDK_RUN_UBSAN=1 00:01:44.575 RUN_NIGHTLY=0 00:17:10 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:01:44.575 00:17:10 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:44.575 00:17:10 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:44.575 00:17:10 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:44.575 00:17:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.575 00:17:10 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.575 00:17:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.575 00:17:10 -- paths/export.sh@5 -- $ export PATH 00:01:44.575 00:17:10 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.575 00:17:10 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output 00:01:44.575 00:17:10 -- common/autobuild_common.sh@437 -- $ date +%s 00:01:44.575 00:17:10 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715725030.XXXXXX 00:01:44.575 00:17:10 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715725030.oGIvco 00:01:44.575 00:17:10 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:01:44.575 00:17:10 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:01:44.575 00:17:10 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/' 00:01:44.575 00:17:10 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:44.575 00:17:10 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:44.575 00:17:10 -- common/autobuild_common.sh@453 -- $ get_config_params 00:01:44.575 00:17:10 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:44.576 00:17:10 -- common/autotest_common.sh@10 -- $ set +x 00:01:44.576 00:17:10 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:01:44.576 00:17:10 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:01:44.576 00:17:10 -- pm/common@17 -- $ local monitor 00:01:44.576 00:17:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.576 00:17:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.837 00:17:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.837 00:17:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.837 00:17:10 -- pm/common@21 -- $ date +%s 00:01:44.837 00:17:10 -- pm/common@25 -- $ sleep 1 00:01:44.837 00:17:10 -- pm/common@21 -- $ 
date +%s 00:01:44.837 00:17:10 -- pm/common@21 -- $ date +%s 00:01:44.837 00:17:10 -- pm/common@21 -- $ date +%s 00:01:44.837 00:17:10 -- pm/common@21 -- $ /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715725030 00:01:44.837 00:17:10 -- pm/common@21 -- $ /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715725030 00:01:44.838 00:17:10 -- pm/common@21 -- $ /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715725030 00:01:44.838 00:17:10 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1715725030 00:01:44.838 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715725030_collect-vmstat.pm.log 00:01:44.838 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715725030_collect-cpu-load.pm.log 00:01:44.838 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715725030_collect-cpu-temp.pm.log 00:01:44.838 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1715725030_collect-bmc-pm.bmc.pm.log 00:01:45.779 00:17:11 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:01:45.779 00:17:11 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:45.779 00:17:11 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:45.779 00:17:11 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/dsa-phy-autotest/spdk 00:01:45.779 00:17:11 -- spdk/autobuild.sh@16 -- $ date -u 00:01:45.779 Tue May 14 10:17:11 PM UTC 2024 00:01:45.779 00:17:11 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:45.779 v24.05-pre-659-g68960dff2 00:01:45.779 00:17:11 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:01:45.779 00:17:11 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:01:45.779 00:17:11 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:01:45.780 00:17:11 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:01:45.780 00:17:11 -- common/autotest_common.sh@10 -- $ set +x 00:01:45.780 ************************************ 00:01:45.780 START TEST asan 00:01:45.780 ************************************ 00:01:45.780 00:17:11 asan -- common/autotest_common.sh@1122 -- $ echo 'using asan' 00:01:45.780 using asan 00:01:45.780 00:01:45.780 real 0m0.000s 00:01:45.780 user 0m0.000s 00:01:45.780 sys 0m0.000s 00:01:45.780 00:17:11 asan -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:01:45.780 00:17:11 asan -- common/autotest_common.sh@10 -- $ set +x 00:01:45.780 ************************************ 00:01:45.780 END TEST asan 00:01:45.780 ************************************ 00:01:45.780 00:17:11 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:45.780 00:17:11 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:45.780 00:17:11 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:01:45.780 00:17:11 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:01:45.780 00:17:11 -- common/autotest_common.sh@10 -- $ set +x 00:01:45.780 ************************************ 00:01:45.780 START TEST 
ubsan 00:01:45.780 ************************************ 00:01:45.780 00:17:11 ubsan -- common/autotest_common.sh@1122 -- $ echo 'using ubsan' 00:01:45.780 using ubsan 00:01:45.780 00:01:45.780 real 0m0.000s 00:01:45.780 user 0m0.000s 00:01:45.780 sys 0m0.000s 00:01:45.780 00:17:11 ubsan -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:01:45.780 00:17:11 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:45.780 ************************************ 00:01:45.780 END TEST ubsan 00:01:45.780 ************************************ 00:01:45.780 00:17:11 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:45.780 00:17:11 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:45.780 00:17:11 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:45.780 00:17:11 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:45.780 00:17:11 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:45.780 00:17:11 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:45.780 00:17:11 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:45.780 00:17:11 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:45.780 00:17:11 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/dsa-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared 00:01:46.041 Using default SPDK env in /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk 00:01:46.041 Using default DPDK in /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:01:46.301 Using 'verbs' RDMA provider 00:01:59.101 Configuring ISA-L (logfile: /var/jenkins/workspace/dsa-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:09.097 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/dsa-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:09.355 Creating mk/config.mk...done. 00:02:09.355 Creating mk/cc.flags.mk...done. 00:02:09.355 Type 'make' to build. 00:02:09.355 00:17:35 -- spdk/autobuild.sh@69 -- $ run_test make make -j128 00:02:09.355 00:17:35 -- common/autotest_common.sh@1098 -- $ '[' 3 -le 1 ']' 00:02:09.355 00:17:35 -- common/autotest_common.sh@1104 -- $ xtrace_disable 00:02:09.355 00:17:35 -- common/autotest_common.sh@10 -- $ set +x 00:02:09.355 ************************************ 00:02:09.355 START TEST make 00:02:09.355 ************************************ 00:02:09.355 00:17:35 make -- common/autotest_common.sh@1122 -- $ make -j128 00:02:09.614 make[1]: Nothing to be done for 'all'. 
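The configure invocation recorded above can be replayed against a local checkout to reproduce this build outside Jenkins. A rough sketch, assuming a fresh clone of the public SPDK repository and fio sources under /usr/src/fio as on this host; the flag list is copied from the log, while the clone URL and the -j width are assumptions:

    # Reproduce the CI build configuration locally (sketch, not the CI script).
    git clone https://github.com/spdk/spdk.git && cd spdk
    git submodule update --init
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
    make -j"$(nproc)"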
00:02:16.262 The Meson build system 00:02:16.262 Version: 1.3.1 00:02:16.262 Source dir: /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk 00:02:16.262 Build dir: /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp 00:02:16.262 Build type: native build 00:02:16.262 Program cat found: YES (/usr/bin/cat) 00:02:16.262 Project name: DPDK 00:02:16.262 Project version: 23.11.0 00:02:16.262 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:16.262 C linker for the host machine: cc ld.bfd 2.39-16 00:02:16.262 Host machine cpu family: x86_64 00:02:16.262 Host machine cpu: x86_64 00:02:16.262 Message: ## Building in Developer Mode ## 00:02:16.262 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:16.262 Program check-symbols.sh found: YES (/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:16.262 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:16.262 Program python3 found: YES (/usr/bin/python3) 00:02:16.262 Program cat found: YES (/usr/bin/cat) 00:02:16.262 Compiler for C supports arguments -march=native: YES 00:02:16.262 Checking for size of "void *" : 8 00:02:16.262 Checking for size of "void *" : 8 (cached) 00:02:16.262 Library m found: YES 00:02:16.262 Library numa found: YES 00:02:16.262 Has header "numaif.h" : YES 00:02:16.262 Library fdt found: NO 00:02:16.262 Library execinfo found: NO 00:02:16.262 Has header "execinfo.h" : YES 00:02:16.262 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:16.262 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:16.262 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:16.262 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:16.262 Run-time dependency openssl found: YES 3.0.9 00:02:16.262 Run-time dependency libpcap found: YES 1.10.4 00:02:16.262 Has header "pcap.h" with dependency libpcap: YES 00:02:16.262 Compiler for C supports arguments -Wcast-qual: YES 00:02:16.262 Compiler for C supports arguments -Wdeprecated: YES 00:02:16.262 Compiler for C supports arguments -Wformat: YES 00:02:16.262 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:16.262 Compiler for C supports arguments -Wformat-security: NO 00:02:16.262 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:16.262 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:16.262 Compiler for C supports arguments -Wnested-externs: YES 00:02:16.262 Compiler for C supports arguments -Wold-style-definition: YES 00:02:16.262 Compiler for C supports arguments -Wpointer-arith: YES 00:02:16.262 Compiler for C supports arguments -Wsign-compare: YES 00:02:16.262 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:16.262 Compiler for C supports arguments -Wundef: YES 00:02:16.262 Compiler for C supports arguments -Wwrite-strings: YES 00:02:16.262 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:16.262 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:16.262 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:16.262 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:16.262 Program objdump found: YES (/usr/bin/objdump) 00:02:16.262 Compiler for C supports arguments -mavx512f: YES 00:02:16.262 Checking if "AVX512 checking" compiles: YES 00:02:16.262 Fetching value of define "__SSE4_2__" : 1 00:02:16.262 Fetching value of define "__AES__" : 1 
00:02:16.262 Fetching value of define "__AVX__" : 1 00:02:16.262 Fetching value of define "__AVX2__" : 1 00:02:16.262 Fetching value of define "__AVX512BW__" : 1 00:02:16.262 Fetching value of define "__AVX512CD__" : 1 00:02:16.262 Fetching value of define "__AVX512DQ__" : 1 00:02:16.262 Fetching value of define "__AVX512F__" : 1 00:02:16.262 Fetching value of define "__AVX512VL__" : 1 00:02:16.262 Fetching value of define "__PCLMUL__" : 1 00:02:16.262 Fetching value of define "__RDRND__" : 1 00:02:16.262 Fetching value of define "__RDSEED__" : 1 00:02:16.262 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:16.262 Fetching value of define "__znver1__" : (undefined) 00:02:16.262 Fetching value of define "__znver2__" : (undefined) 00:02:16.262 Fetching value of define "__znver3__" : (undefined) 00:02:16.262 Fetching value of define "__znver4__" : (undefined) 00:02:16.262 Library asan found: YES 00:02:16.262 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:16.262 Message: lib/log: Defining dependency "log" 00:02:16.262 Message: lib/kvargs: Defining dependency "kvargs" 00:02:16.262 Message: lib/telemetry: Defining dependency "telemetry" 00:02:16.262 Library rt found: YES 00:02:16.262 Checking for function "getentropy" : NO 00:02:16.262 Message: lib/eal: Defining dependency "eal" 00:02:16.262 Message: lib/ring: Defining dependency "ring" 00:02:16.262 Message: lib/rcu: Defining dependency "rcu" 00:02:16.262 Message: lib/mempool: Defining dependency "mempool" 00:02:16.262 Message: lib/mbuf: Defining dependency "mbuf" 00:02:16.262 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:16.262 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:16.262 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:16.262 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:16.262 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:16.262 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:16.262 Compiler for C supports arguments -mpclmul: YES 00:02:16.262 Compiler for C supports arguments -maes: YES 00:02:16.262 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:16.262 Compiler for C supports arguments -mavx512bw: YES 00:02:16.262 Compiler for C supports arguments -mavx512dq: YES 00:02:16.262 Compiler for C supports arguments -mavx512vl: YES 00:02:16.262 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:16.262 Compiler for C supports arguments -mavx2: YES 00:02:16.262 Compiler for C supports arguments -mavx: YES 00:02:16.262 Message: lib/net: Defining dependency "net" 00:02:16.262 Message: lib/meter: Defining dependency "meter" 00:02:16.262 Message: lib/ethdev: Defining dependency "ethdev" 00:02:16.262 Message: lib/pci: Defining dependency "pci" 00:02:16.262 Message: lib/cmdline: Defining dependency "cmdline" 00:02:16.262 Message: lib/hash: Defining dependency "hash" 00:02:16.262 Message: lib/timer: Defining dependency "timer" 00:02:16.262 Message: lib/compressdev: Defining dependency "compressdev" 00:02:16.262 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:16.262 Message: lib/dmadev: Defining dependency "dmadev" 00:02:16.262 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:16.262 Message: lib/power: Defining dependency "power" 00:02:16.262 Message: lib/reorder: Defining dependency "reorder" 00:02:16.262 Message: lib/security: Defining dependency "security" 00:02:16.262 Has header "linux/userfaultfd.h" : YES 00:02:16.262 Has header "linux/vduse.h" : YES 00:02:16.262 Message: lib/vhost: Defining dependency 
"vhost" 00:02:16.262 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:16.262 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:16.262 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:16.262 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:16.262 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:16.262 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:16.262 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:16.262 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:16.262 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:16.262 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:16.262 Program doxygen found: YES (/usr/bin/doxygen) 00:02:16.262 Configuring doxy-api-html.conf using configuration 00:02:16.262 Configuring doxy-api-man.conf using configuration 00:02:16.262 Program mandb found: YES (/usr/bin/mandb) 00:02:16.262 Program sphinx-build found: NO 00:02:16.262 Configuring rte_build_config.h using configuration 00:02:16.262 Message: 00:02:16.262 ================= 00:02:16.262 Applications Enabled 00:02:16.262 ================= 00:02:16.262 00:02:16.262 apps: 00:02:16.262 00:02:16.262 00:02:16.262 Message: 00:02:16.262 ================= 00:02:16.262 Libraries Enabled 00:02:16.262 ================= 00:02:16.262 00:02:16.262 libs: 00:02:16.262 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:16.262 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:16.262 cryptodev, dmadev, power, reorder, security, vhost, 00:02:16.262 00:02:16.262 Message: 00:02:16.262 =============== 00:02:16.262 Drivers Enabled 00:02:16.262 =============== 00:02:16.262 00:02:16.262 common: 00:02:16.262 00:02:16.262 bus: 00:02:16.262 pci, vdev, 00:02:16.262 mempool: 00:02:16.262 ring, 00:02:16.262 dma: 00:02:16.262 00:02:16.262 net: 00:02:16.262 00:02:16.262 crypto: 00:02:16.262 00:02:16.262 compress: 00:02:16.262 00:02:16.263 vdpa: 00:02:16.263 00:02:16.263 00:02:16.263 Message: 00:02:16.263 ================= 00:02:16.263 Content Skipped 00:02:16.263 ================= 00:02:16.263 00:02:16.263 apps: 00:02:16.263 dumpcap: explicitly disabled via build config 00:02:16.263 graph: explicitly disabled via build config 00:02:16.263 pdump: explicitly disabled via build config 00:02:16.263 proc-info: explicitly disabled via build config 00:02:16.263 test-acl: explicitly disabled via build config 00:02:16.263 test-bbdev: explicitly disabled via build config 00:02:16.263 test-cmdline: explicitly disabled via build config 00:02:16.263 test-compress-perf: explicitly disabled via build config 00:02:16.263 test-crypto-perf: explicitly disabled via build config 00:02:16.263 test-dma-perf: explicitly disabled via build config 00:02:16.263 test-eventdev: explicitly disabled via build config 00:02:16.263 test-fib: explicitly disabled via build config 00:02:16.263 test-flow-perf: explicitly disabled via build config 00:02:16.263 test-gpudev: explicitly disabled via build config 00:02:16.263 test-mldev: explicitly disabled via build config 00:02:16.263 test-pipeline: explicitly disabled via build config 00:02:16.263 test-pmd: explicitly disabled via build config 00:02:16.263 test-regex: explicitly disabled via build config 00:02:16.263 test-sad: explicitly disabled via build config 00:02:16.263 test-security-perf: explicitly disabled via build config 00:02:16.263 
00:02:16.263 libs: 00:02:16.263 metrics: explicitly disabled via build config 00:02:16.263 acl: explicitly disabled via build config 00:02:16.263 bbdev: explicitly disabled via build config 00:02:16.263 bitratestats: explicitly disabled via build config 00:02:16.263 bpf: explicitly disabled via build config 00:02:16.263 cfgfile: explicitly disabled via build config 00:02:16.263 distributor: explicitly disabled via build config 00:02:16.263 efd: explicitly disabled via build config 00:02:16.263 eventdev: explicitly disabled via build config 00:02:16.263 dispatcher: explicitly disabled via build config 00:02:16.263 gpudev: explicitly disabled via build config 00:02:16.263 gro: explicitly disabled via build config 00:02:16.263 gso: explicitly disabled via build config 00:02:16.263 ip_frag: explicitly disabled via build config 00:02:16.263 jobstats: explicitly disabled via build config 00:02:16.263 latencystats: explicitly disabled via build config 00:02:16.263 lpm: explicitly disabled via build config 00:02:16.263 member: explicitly disabled via build config 00:02:16.263 pcapng: explicitly disabled via build config 00:02:16.263 rawdev: explicitly disabled via build config 00:02:16.263 regexdev: explicitly disabled via build config 00:02:16.263 mldev: explicitly disabled via build config 00:02:16.263 rib: explicitly disabled via build config 00:02:16.263 sched: explicitly disabled via build config 00:02:16.263 stack: explicitly disabled via build config 00:02:16.263 ipsec: explicitly disabled via build config 00:02:16.263 pdcp: explicitly disabled via build config 00:02:16.263 fib: explicitly disabled via build config 00:02:16.263 port: explicitly disabled via build config 00:02:16.263 pdump: explicitly disabled via build config 00:02:16.263 table: explicitly disabled via build config 00:02:16.263 pipeline: explicitly disabled via build config 00:02:16.263 graph: explicitly disabled via build config 00:02:16.263 node: explicitly disabled via build config 00:02:16.263 00:02:16.263 drivers: 00:02:16.263 common/cpt: not in enabled drivers build config 00:02:16.263 common/dpaax: not in enabled drivers build config 00:02:16.263 common/iavf: not in enabled drivers build config 00:02:16.263 common/idpf: not in enabled drivers build config 00:02:16.263 common/mvep: not in enabled drivers build config 00:02:16.263 common/octeontx: not in enabled drivers build config 00:02:16.263 bus/auxiliary: not in enabled drivers build config 00:02:16.263 bus/cdx: not in enabled drivers build config 00:02:16.263 bus/dpaa: not in enabled drivers build config 00:02:16.263 bus/fslmc: not in enabled drivers build config 00:02:16.263 bus/ifpga: not in enabled drivers build config 00:02:16.263 bus/platform: not in enabled drivers build config 00:02:16.263 bus/vmbus: not in enabled drivers build config 00:02:16.263 common/cnxk: not in enabled drivers build config 00:02:16.263 common/mlx5: not in enabled drivers build config 00:02:16.263 common/nfp: not in enabled drivers build config 00:02:16.263 common/qat: not in enabled drivers build config 00:02:16.263 common/sfc_efx: not in enabled drivers build config 00:02:16.263 mempool/bucket: not in enabled drivers build config 00:02:16.263 mempool/cnxk: not in enabled drivers build config 00:02:16.263 mempool/dpaa: not in enabled drivers build config 00:02:16.263 mempool/dpaa2: not in enabled drivers build config 00:02:16.263 mempool/octeontx: not in enabled drivers build config 00:02:16.263 mempool/stack: not in enabled drivers build config 00:02:16.263 dma/cnxk: not in enabled 
drivers build config 00:02:16.263 dma/dpaa: not in enabled drivers build config 00:02:16.263 dma/dpaa2: not in enabled drivers build config 00:02:16.263 dma/hisilicon: not in enabled drivers build config 00:02:16.263 dma/idxd: not in enabled drivers build config 00:02:16.263 dma/ioat: not in enabled drivers build config 00:02:16.263 dma/skeleton: not in enabled drivers build config 00:02:16.263 net/af_packet: not in enabled drivers build config 00:02:16.263 net/af_xdp: not in enabled drivers build config 00:02:16.263 net/ark: not in enabled drivers build config 00:02:16.263 net/atlantic: not in enabled drivers build config 00:02:16.263 net/avp: not in enabled drivers build config 00:02:16.263 net/axgbe: not in enabled drivers build config 00:02:16.263 net/bnx2x: not in enabled drivers build config 00:02:16.263 net/bnxt: not in enabled drivers build config 00:02:16.263 net/bonding: not in enabled drivers build config 00:02:16.263 net/cnxk: not in enabled drivers build config 00:02:16.263 net/cpfl: not in enabled drivers build config 00:02:16.263 net/cxgbe: not in enabled drivers build config 00:02:16.263 net/dpaa: not in enabled drivers build config 00:02:16.263 net/dpaa2: not in enabled drivers build config 00:02:16.263 net/e1000: not in enabled drivers build config 00:02:16.263 net/ena: not in enabled drivers build config 00:02:16.263 net/enetc: not in enabled drivers build config 00:02:16.263 net/enetfec: not in enabled drivers build config 00:02:16.263 net/enic: not in enabled drivers build config 00:02:16.263 net/failsafe: not in enabled drivers build config 00:02:16.263 net/fm10k: not in enabled drivers build config 00:02:16.263 net/gve: not in enabled drivers build config 00:02:16.263 net/hinic: not in enabled drivers build config 00:02:16.263 net/hns3: not in enabled drivers build config 00:02:16.263 net/i40e: not in enabled drivers build config 00:02:16.263 net/iavf: not in enabled drivers build config 00:02:16.263 net/ice: not in enabled drivers build config 00:02:16.263 net/idpf: not in enabled drivers build config 00:02:16.263 net/igc: not in enabled drivers build config 00:02:16.263 net/ionic: not in enabled drivers build config 00:02:16.263 net/ipn3ke: not in enabled drivers build config 00:02:16.263 net/ixgbe: not in enabled drivers build config 00:02:16.263 net/mana: not in enabled drivers build config 00:02:16.263 net/memif: not in enabled drivers build config 00:02:16.263 net/mlx4: not in enabled drivers build config 00:02:16.263 net/mlx5: not in enabled drivers build config 00:02:16.263 net/mvneta: not in enabled drivers build config 00:02:16.263 net/mvpp2: not in enabled drivers build config 00:02:16.263 net/netvsc: not in enabled drivers build config 00:02:16.263 net/nfb: not in enabled drivers build config 00:02:16.263 net/nfp: not in enabled drivers build config 00:02:16.263 net/ngbe: not in enabled drivers build config 00:02:16.263 net/null: not in enabled drivers build config 00:02:16.263 net/octeontx: not in enabled drivers build config 00:02:16.263 net/octeon_ep: not in enabled drivers build config 00:02:16.263 net/pcap: not in enabled drivers build config 00:02:16.263 net/pfe: not in enabled drivers build config 00:02:16.263 net/qede: not in enabled drivers build config 00:02:16.263 net/ring: not in enabled drivers build config 00:02:16.263 net/sfc: not in enabled drivers build config 00:02:16.263 net/softnic: not in enabled drivers build config 00:02:16.263 net/tap: not in enabled drivers build config 00:02:16.263 net/thunderx: not in enabled drivers build 
config 00:02:16.263 net/txgbe: not in enabled drivers build config 00:02:16.263 net/vdev_netvsc: not in enabled drivers build config 00:02:16.263 net/vhost: not in enabled drivers build config 00:02:16.263 net/virtio: not in enabled drivers build config 00:02:16.263 net/vmxnet3: not in enabled drivers build config 00:02:16.263 raw/*: missing internal dependency, "rawdev" 00:02:16.263 crypto/armv8: not in enabled drivers build config 00:02:16.263 crypto/bcmfs: not in enabled drivers build config 00:02:16.263 crypto/caam_jr: not in enabled drivers build config 00:02:16.263 crypto/ccp: not in enabled drivers build config 00:02:16.263 crypto/cnxk: not in enabled drivers build config 00:02:16.263 crypto/dpaa_sec: not in enabled drivers build config 00:02:16.263 crypto/dpaa2_sec: not in enabled drivers build config 00:02:16.263 crypto/ipsec_mb: not in enabled drivers build config 00:02:16.263 crypto/mlx5: not in enabled drivers build config 00:02:16.263 crypto/mvsam: not in enabled drivers build config 00:02:16.263 crypto/nitrox: not in enabled drivers build config 00:02:16.263 crypto/null: not in enabled drivers build config 00:02:16.263 crypto/octeontx: not in enabled drivers build config 00:02:16.263 crypto/openssl: not in enabled drivers build config 00:02:16.263 crypto/scheduler: not in enabled drivers build config 00:02:16.263 crypto/uadk: not in enabled drivers build config 00:02:16.263 crypto/virtio: not in enabled drivers build config 00:02:16.264 compress/isal: not in enabled drivers build config 00:02:16.264 compress/mlx5: not in enabled drivers build config 00:02:16.264 compress/octeontx: not in enabled drivers build config 00:02:16.264 compress/zlib: not in enabled drivers build config 00:02:16.264 regex/*: missing internal dependency, "regexdev" 00:02:16.264 ml/*: missing internal dependency, "mldev" 00:02:16.264 vdpa/ifc: not in enabled drivers build config 00:02:16.264 vdpa/mlx5: not in enabled drivers build config 00:02:16.264 vdpa/nfp: not in enabled drivers build config 00:02:16.264 vdpa/sfc: not in enabled drivers build config 00:02:16.264 event/*: missing internal dependency, "eventdev" 00:02:16.264 baseband/*: missing internal dependency, "bbdev" 00:02:16.264 gpu/*: missing internal dependency, "gpudev" 00:02:16.264 00:02:16.264 00:02:16.264 Build targets in project: 84 00:02:16.264 00:02:16.264 DPDK 23.11.0 00:02:16.264 00:02:16.264 User defined options 00:02:16.264 buildtype : debug 00:02:16.264 default_library : shared 00:02:16.264 libdir : lib 00:02:16.264 prefix : /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:02:16.264 b_sanitize : address 00:02:16.264 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:16.264 c_link_args : 00:02:16.264 cpu_instruction_set: native 00:02:16.264 disable_apps : test-sad,graph,test-regex,dumpcap,test-eventdev,test-compress-perf,pdump,test-security-perf,test-pmd,test-flow-perf,test-pipeline,test-crypto-perf,test-gpudev,test-cmdline,test-dma-perf,proc-info,test-bbdev,test-acl,test,test-mldev,test-fib 00:02:16.264 disable_libs : sched,port,dispatcher,graph,rawdev,pdcp,bitratestats,ipsec,pcapng,pdump,gso,cfgfile,gpudev,ip_frag,node,distributor,mldev,lpm,acl,bpf,latencystats,eventdev,regexdev,gro,stack,fib,pipeline,bbdev,table,metrics,member,jobstats,efd,rib 00:02:16.264 enable_docs : false 00:02:16.264 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:16.264 enable_kmods : false 00:02:16.264 tests : false 00:02:16.264 00:02:16.264 Found ninja-1.11.1.git.kitware.jobserver-1 
at /usr/local/bin/ninja 00:02:16.264 ninja: Entering directory `/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp' 00:02:16.264 [1/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:16.264 [2/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:16.264 [3/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:16.264 [4/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:16.264 [5/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:16.264 [6/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:16.264 [7/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:16.264 [8/264] Linking static target lib/librte_kvargs.a 00:02:16.264 [9/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:16.264 [10/264] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:16.264 [11/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:16.264 [12/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:16.264 [13/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:16.264 [14/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:16.264 [15/264] Linking static target lib/librte_log.a 00:02:16.264 [16/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:16.264 [17/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:16.264 [18/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:16.264 [19/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:16.264 [20/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:16.264 [21/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:16.264 [22/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:16.264 [23/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:16.264 [24/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:16.264 [25/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:16.264 [26/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:16.264 [27/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:16.264 [28/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:16.264 [29/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:16.264 [30/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:16.264 [31/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:16.264 [32/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:16.264 [33/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:16.264 [34/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:16.264 [35/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:16.264 [36/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:16.264 [37/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:16.264 [38/264] Linking static target lib/librte_pci.a 00:02:16.264 [39/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:16.264 [40/264] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:16.522 [41/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:16.522 [42/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:16.522 [43/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:16.522 [44/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:16.522 [45/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:16.522 [46/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:16.522 [47/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:16.522 [48/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:16.522 [49/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:16.522 [50/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:16.522 [51/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:16.522 [52/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:16.522 [53/264] Linking static target lib/librte_telemetry.a 00:02:16.522 [54/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:16.523 [55/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.523 [56/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:16.523 [57/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:16.523 [58/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:16.523 [59/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:16.523 [60/264] Linking static target lib/librte_meter.a 00:02:16.523 [61/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:16.523 [62/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:16.523 [63/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:16.523 [64/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:16.523 [65/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:16.523 [66/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:16.523 [67/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:16.523 [68/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:16.523 [69/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:16.523 [70/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:16.523 [71/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:16.523 [72/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:16.523 [73/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:16.523 [74/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:16.523 [75/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:16.523 [76/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:16.523 [77/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:16.523 [78/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:16.523 [79/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:16.523 [80/264] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:16.523 [81/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:16.523 [82/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:16.523 [83/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:16.523 [84/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:16.523 [85/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:16.523 [86/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:16.523 [87/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:16.523 [88/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:16.523 [89/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:16.523 [90/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:16.523 [91/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:16.523 [92/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:16.523 [93/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.523 [94/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:16.523 [95/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:16.523 [96/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:16.523 [97/264] Linking static target lib/librte_ring.a 00:02:16.523 [98/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:16.523 [99/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:16.523 [100/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:16.523 [101/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:16.523 [102/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:16.523 [103/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:16.523 [104/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:16.523 [105/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:16.523 [106/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:16.523 [107/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:16.523 [108/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:16.523 [109/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:16.523 [110/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:16.523 [111/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:16.523 [112/264] Linking static target lib/librte_timer.a 00:02:16.781 [113/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:16.781 [114/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:16.781 [115/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:16.781 [116/264] Linking static target lib/librte_cmdline.a 00:02:16.781 [117/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.781 [118/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:16.781 [119/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:16.781 [120/264] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:16.781 [121/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:16.781 [122/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:16.781 [123/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:16.781 [124/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:16.781 [125/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:16.781 [126/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:16.781 [127/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:16.781 [128/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:16.781 [129/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:16.781 [130/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.781 [131/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:16.781 [132/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:16.781 [133/264] Linking static target lib/librte_rcu.a 00:02:16.781 [134/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:16.781 [135/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:16.781 [136/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:16.781 [137/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:16.781 [138/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:16.781 [139/264] Linking target lib/librte_log.so.24.0 00:02:16.781 [140/264] Linking static target lib/librte_net.a 00:02:16.781 [141/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:16.781 [142/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:16.781 [143/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:16.781 [144/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:16.781 [145/264] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:16.781 [146/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:16.781 [147/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:16.781 [148/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:16.781 [149/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:16.781 [150/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:16.781 [151/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:16.781 [152/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:16.781 [153/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:16.781 [154/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:16.781 [155/264] Linking static target lib/librte_mempool.a 00:02:16.781 [156/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:16.781 [157/264] Linking static target lib/librte_eal.a 00:02:16.781 [158/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.781 [159/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:16.781 [160/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:16.781 
[161/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:16.781 [162/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:16.781 [163/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:16.781 [164/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:16.781 [165/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:16.781 [166/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:16.781 [167/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:16.781 [168/264] Linking static target lib/librte_dmadev.a 00:02:16.781 [169/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:16.781 [170/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:16.781 [171/264] Linking static target lib/librte_power.a 00:02:16.781 [172/264] Linking static target lib/librte_compressdev.a 00:02:16.781 [173/264] Linking target lib/librte_kvargs.so.24.0 00:02:16.781 [174/264] Linking target lib/librte_telemetry.so.24.0 00:02:16.781 [175/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:16.781 [176/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:16.781 [177/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:16.781 [178/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:17.038 [179/264] Linking static target lib/librte_reorder.a 00:02:17.038 [180/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.038 [181/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:17.038 [182/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:17.038 [183/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.038 [184/264] Linking static target drivers/librte_bus_vdev.a 00:02:17.038 [185/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.039 [186/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:17.039 [187/264] Linking static target lib/librte_security.a 00:02:17.039 [188/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:17.039 [189/264] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:17.039 [190/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:17.039 [191/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:17.039 [192/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:17.039 [193/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:17.039 [194/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:17.039 [195/264] Linking static target drivers/librte_bus_pci.a 00:02:17.039 [196/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:17.039 [197/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:17.039 [198/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:17.039 [199/264] Linking static target lib/librte_hash.a 00:02:17.039 [200/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:17.039 [201/264] Linking static target 
lib/librte_mbuf.a 00:02:17.039 [202/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.039 [203/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.039 [204/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.039 [205/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:17.296 [206/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:17.296 [207/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:17.296 [208/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:17.296 [209/264] Linking static target drivers/librte_mempool_ring.a 00:02:17.296 [210/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.296 [211/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.296 [212/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.296 [213/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.296 [214/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.296 [215/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.553 [216/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:17.553 [217/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:17.553 [218/264] Linking static target lib/librte_cryptodev.a 00:02:17.553 [219/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.553 [220/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.118 [221/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:18.118 [222/264] Linking static target lib/librte_ethdev.a 00:02:18.375 [223/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:18.939 [224/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.836 [225/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:20.836 [226/264] Linking static target lib/librte_vhost.a 00:02:22.208 [227/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.576 [228/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.576 [229/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:23.576 [230/264] Linking target lib/librte_eal.so.24.0 00:02:23.576 [231/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:23.576 [232/264] Linking target lib/librte_ring.so.24.0 00:02:23.576 [233/264] Linking target lib/librte_meter.so.24.0 00:02:23.576 [234/264] Linking target lib/librte_timer.so.24.0 00:02:23.576 [235/264] Linking target drivers/librte_bus_vdev.so.24.0 00:02:23.576 [236/264] Linking target lib/librte_pci.so.24.0 00:02:23.576 [237/264] Linking target lib/librte_dmadev.so.24.0 00:02:23.576 [238/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:23.576 [239/264] Generating symbol file 
lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:23.576 [240/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:23.576 [241/264] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:23.576 [242/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:23.576 [243/264] Linking target lib/librte_rcu.so.24.0 00:02:23.833 [244/264] Linking target lib/librte_mempool.so.24.0 00:02:23.833 [245/264] Linking target drivers/librte_bus_pci.so.24.0 00:02:23.833 [246/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:23.833 [247/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:23.833 [248/264] Linking target lib/librte_mbuf.so.24.0 00:02:23.833 [249/264] Linking target drivers/librte_mempool_ring.so.24.0 00:02:23.833 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:23.833 [251/264] Linking target lib/librte_reorder.so.24.0 00:02:23.833 [252/264] Linking target lib/librte_compressdev.so.24.0 00:02:23.833 [253/264] Linking target lib/librte_net.so.24.0 00:02:23.833 [254/264] Linking target lib/librte_cryptodev.so.24.0 00:02:24.091 [255/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:24.091 [256/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:24.091 [257/264] Linking target lib/librte_hash.so.24.0 00:02:24.091 [258/264] Linking target lib/librte_cmdline.so.24.0 00:02:24.091 [259/264] Linking target lib/librte_security.so.24.0 00:02:24.091 [260/264] Linking target lib/librte_ethdev.so.24.0 00:02:24.091 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:24.091 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:24.091 [263/264] Linking target lib/librte_power.so.24.0 00:02:24.348 [264/264] Linking target lib/librte_vhost.so.24.0 00:02:24.348 INFO: autodetecting backend as ninja 00:02:24.348 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp -j 128 00:02:24.913 CC lib/log/log_flags.o 00:02:24.913 CC lib/log/log.o 00:02:24.913 CC lib/log/log_deprecated.o 00:02:24.913 CC lib/ut_mock/mock.o 00:02:24.913 CC lib/ut/ut.o 00:02:24.913 LIB libspdk_log.a 00:02:25.170 SO libspdk_log.so.7.0 00:02:25.170 LIB libspdk_ut_mock.a 00:02:25.170 SO libspdk_ut_mock.so.6.0 00:02:25.170 SYMLINK libspdk_log.so 00:02:25.170 LIB libspdk_ut.a 00:02:25.170 SYMLINK libspdk_ut_mock.so 00:02:25.170 SO libspdk_ut.so.2.0 00:02:25.170 SYMLINK libspdk_ut.so 00:02:25.428 CC lib/util/bit_array.o 00:02:25.428 CC lib/util/cpuset.o 00:02:25.428 CC lib/dma/dma.o 00:02:25.428 CC lib/util/base64.o 00:02:25.428 CC lib/util/crc16.o 00:02:25.428 CXX lib/trace_parser/trace.o 00:02:25.428 CC lib/util/crc32_ieee.o 00:02:25.428 CC lib/util/crc32.o 00:02:25.428 CC lib/util/fd.o 00:02:25.428 CC lib/util/file.o 00:02:25.428 CC lib/util/crc32c.o 00:02:25.428 CC lib/util/crc64.o 00:02:25.428 CC lib/util/dif.o 00:02:25.428 CC lib/util/iov.o 00:02:25.428 CC lib/util/hexlify.o 00:02:25.428 CC lib/util/math.o 00:02:25.428 CC lib/util/pipe.o 00:02:25.428 CC lib/util/string.o 00:02:25.428 CC lib/util/strerror_tls.o 00:02:25.428 CC lib/util/fd_group.o 00:02:25.428 CC lib/util/uuid.o 00:02:25.428 CC lib/ioat/ioat.o 00:02:25.428 CC lib/util/xor.o 00:02:25.428 CC lib/util/zipf.o 00:02:25.428 CC 
lib/vfio_user/host/vfio_user_pci.o 00:02:25.428 CC lib/vfio_user/host/vfio_user.o 00:02:25.428 LIB libspdk_dma.a 00:02:25.428 SO libspdk_dma.so.4.0 00:02:25.428 SYMLINK libspdk_dma.so 00:02:25.687 LIB libspdk_ioat.a 00:02:25.687 SO libspdk_ioat.so.7.0 00:02:25.687 LIB libspdk_vfio_user.a 00:02:25.687 SYMLINK libspdk_ioat.so 00:02:25.687 SO libspdk_vfio_user.so.5.0 00:02:25.687 SYMLINK libspdk_vfio_user.so 00:02:25.947 LIB libspdk_util.a 00:02:25.947 SO libspdk_util.so.9.0 00:02:26.205 SYMLINK libspdk_util.so 00:02:26.205 LIB libspdk_trace_parser.a 00:02:26.205 SO libspdk_trace_parser.so.5.0 00:02:26.463 CC lib/env_dpdk/env.o 00:02:26.463 CC lib/json/json_write.o 00:02:26.463 CC lib/json/json_parse.o 00:02:26.463 CC lib/env_dpdk/memory.o 00:02:26.463 CC lib/env_dpdk/pci.o 00:02:26.463 CC lib/json/json_util.o 00:02:26.463 CC lib/env_dpdk/init.o 00:02:26.463 CC lib/env_dpdk/pci_ioat.o 00:02:26.463 CC lib/env_dpdk/threads.o 00:02:26.463 CC lib/env_dpdk/pci_virtio.o 00:02:26.463 CC lib/env_dpdk/pci_vmd.o 00:02:26.463 CC lib/env_dpdk/pci_idxd.o 00:02:26.463 CC lib/env_dpdk/pci_event.o 00:02:26.463 CC lib/env_dpdk/sigbus_handler.o 00:02:26.463 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:26.463 CC lib/env_dpdk/pci_dpdk.o 00:02:26.463 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:26.463 CC lib/vmd/vmd.o 00:02:26.463 CC lib/rdma/common.o 00:02:26.463 CC lib/vmd/led.o 00:02:26.463 CC lib/rdma/rdma_verbs.o 00:02:26.463 CC lib/idxd/idxd.o 00:02:26.463 CC lib/idxd/idxd_user.o 00:02:26.463 CC lib/conf/conf.o 00:02:26.463 SYMLINK libspdk_trace_parser.so 00:02:26.463 LIB libspdk_conf.a 00:02:26.463 SO libspdk_conf.so.6.0 00:02:26.720 LIB libspdk_json.a 00:02:26.720 SYMLINK libspdk_conf.so 00:02:26.720 SO libspdk_json.so.6.0 00:02:26.720 LIB libspdk_rdma.a 00:02:26.720 SYMLINK libspdk_json.so 00:02:26.720 SO libspdk_rdma.so.6.0 00:02:26.720 SYMLINK libspdk_rdma.so 00:02:26.720 LIB libspdk_idxd.a 00:02:26.979 SO libspdk_idxd.so.12.0 00:02:26.979 LIB libspdk_vmd.a 00:02:26.979 SO libspdk_vmd.so.6.0 00:02:26.979 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:26.979 CC lib/jsonrpc/jsonrpc_server.o 00:02:26.979 CC lib/jsonrpc/jsonrpc_client.o 00:02:26.979 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:26.979 SYMLINK libspdk_idxd.so 00:02:26.979 SYMLINK libspdk_vmd.so 00:02:27.237 LIB libspdk_jsonrpc.a 00:02:27.237 SO libspdk_jsonrpc.so.6.0 00:02:27.237 SYMLINK libspdk_jsonrpc.so 00:02:27.496 CC lib/rpc/rpc.o 00:02:27.496 LIB libspdk_rpc.a 00:02:27.496 SO libspdk_rpc.so.6.0 00:02:27.756 SYMLINK libspdk_rpc.so 00:02:27.756 LIB libspdk_env_dpdk.a 00:02:27.756 CC lib/notify/notify_rpc.o 00:02:27.756 CC lib/notify/notify.o 00:02:27.756 CC lib/keyring/keyring.o 00:02:27.756 CC lib/keyring/keyring_rpc.o 00:02:27.756 CC lib/trace/trace.o 00:02:28.015 CC lib/trace/trace_flags.o 00:02:28.015 CC lib/trace/trace_rpc.o 00:02:28.015 SO libspdk_env_dpdk.so.14.0 00:02:28.015 LIB libspdk_notify.a 00:02:28.015 SO libspdk_notify.so.6.0 00:02:28.015 SYMLINK libspdk_env_dpdk.so 00:02:28.015 SYMLINK libspdk_notify.so 00:02:28.015 LIB libspdk_keyring.a 00:02:28.015 SO libspdk_keyring.so.1.0 00:02:28.272 LIB libspdk_trace.a 00:02:28.272 SO libspdk_trace.so.10.0 00:02:28.272 SYMLINK libspdk_keyring.so 00:02:28.272 SYMLINK libspdk_trace.so 00:02:28.532 CC lib/thread/thread.o 00:02:28.532 CC lib/thread/iobuf.o 00:02:28.532 CC lib/sock/sock.o 00:02:28.532 CC lib/sock/sock_rpc.o 00:02:28.791 LIB libspdk_sock.a 00:02:28.791 SO libspdk_sock.so.9.0 00:02:29.049 SYMLINK libspdk_sock.so 00:02:29.307 CC lib/nvme/nvme_fabric.o 00:02:29.308 CC lib/nvme/nvme_ctrlr_cmd.o 
00:02:29.308 CC lib/nvme/nvme_ctrlr.o 00:02:29.308 CC lib/nvme/nvme_ns_cmd.o 00:02:29.308 CC lib/nvme/nvme_pcie.o 00:02:29.308 CC lib/nvme/nvme_ns.o 00:02:29.308 CC lib/nvme/nvme.o 00:02:29.308 CC lib/nvme/nvme_pcie_common.o 00:02:29.308 CC lib/nvme/nvme_transport.o 00:02:29.308 CC lib/nvme/nvme_qpair.o 00:02:29.308 CC lib/nvme/nvme_quirks.o 00:02:29.308 CC lib/nvme/nvme_discovery.o 00:02:29.308 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:29.308 CC lib/nvme/nvme_tcp.o 00:02:29.308 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:29.308 CC lib/nvme/nvme_opal.o 00:02:29.308 CC lib/nvme/nvme_poll_group.o 00:02:29.308 CC lib/nvme/nvme_io_msg.o 00:02:29.308 CC lib/nvme/nvme_zns.o 00:02:29.308 CC lib/nvme/nvme_auth.o 00:02:29.308 CC lib/nvme/nvme_stubs.o 00:02:29.308 CC lib/nvme/nvme_rdma.o 00:02:29.308 CC lib/nvme/nvme_cuse.o 00:02:29.565 LIB libspdk_thread.a 00:02:29.565 SO libspdk_thread.so.10.0 00:02:29.565 SYMLINK libspdk_thread.so 00:02:29.823 CC lib/virtio/virtio_vfio_user.o 00:02:29.823 CC lib/virtio/virtio.o 00:02:29.823 CC lib/virtio/virtio_vhost_user.o 00:02:29.823 CC lib/virtio/virtio_pci.o 00:02:29.823 CC lib/accel/accel_rpc.o 00:02:29.823 CC lib/accel/accel.o 00:02:29.823 CC lib/accel/accel_sw.o 00:02:29.823 CC lib/init/rpc.o 00:02:29.823 CC lib/init/subsystem_rpc.o 00:02:29.823 CC lib/init/json_config.o 00:02:29.823 CC lib/init/subsystem.o 00:02:29.823 CC lib/blob/request.o 00:02:29.823 CC lib/blob/blobstore.o 00:02:29.823 CC lib/blob/zeroes.o 00:02:29.823 CC lib/blob/blob_bs_dev.o 00:02:30.081 LIB libspdk_init.a 00:02:30.081 SO libspdk_init.so.5.0 00:02:30.081 SYMLINK libspdk_init.so 00:02:30.339 LIB libspdk_virtio.a 00:02:30.339 SO libspdk_virtio.so.7.0 00:02:30.339 SYMLINK libspdk_virtio.so 00:02:30.339 CC lib/event/reactor.o 00:02:30.339 CC lib/event/app.o 00:02:30.339 CC lib/event/app_rpc.o 00:02:30.339 CC lib/event/log_rpc.o 00:02:30.339 CC lib/event/scheduler_static.o 00:02:30.905 LIB libspdk_nvme.a 00:02:30.905 SO libspdk_nvme.so.13.0 00:02:30.905 LIB libspdk_event.a 00:02:30.905 SO libspdk_event.so.13.0 00:02:30.905 SYMLINK libspdk_event.so 00:02:30.905 LIB libspdk_accel.a 00:02:30.905 SO libspdk_accel.so.15.0 00:02:31.163 SYMLINK libspdk_accel.so 00:02:31.163 SYMLINK libspdk_nvme.so 00:02:31.421 CC lib/bdev/bdev_rpc.o 00:02:31.421 CC lib/bdev/bdev.o 00:02:31.421 CC lib/bdev/bdev_zone.o 00:02:31.421 CC lib/bdev/scsi_nvme.o 00:02:31.421 CC lib/bdev/part.o 00:02:32.358 LIB libspdk_blob.a 00:02:32.358 SO libspdk_blob.so.11.0 00:02:32.358 SYMLINK libspdk_blob.so 00:02:32.617 CC lib/lvol/lvol.o 00:02:32.617 CC lib/blobfs/tree.o 00:02:32.617 CC lib/blobfs/blobfs.o 00:02:33.555 LIB libspdk_blobfs.a 00:02:33.555 SO libspdk_blobfs.so.10.0 00:02:33.815 LIB libspdk_lvol.a 00:02:33.815 SYMLINK libspdk_blobfs.so 00:02:33.815 SO libspdk_lvol.so.10.0 00:02:33.815 SYMLINK libspdk_lvol.so 00:02:34.074 LIB libspdk_bdev.a 00:02:34.074 SO libspdk_bdev.so.15.0 00:02:34.074 SYMLINK libspdk_bdev.so 00:02:34.333 CC lib/ftl/ftl_core.o 00:02:34.333 CC lib/nvmf/ctrlr_bdev.o 00:02:34.333 CC lib/nvmf/ctrlr.o 00:02:34.333 CC lib/ftl/ftl_init.o 00:02:34.333 CC lib/nvmf/subsystem.o 00:02:34.333 CC lib/nvmf/ctrlr_discovery.o 00:02:34.333 CC lib/ftl/ftl_layout.o 00:02:34.333 CC lib/ftl/ftl_debug.o 00:02:34.333 CC lib/ftl/ftl_io.o 00:02:34.333 CC lib/nvmf/nvmf.o 00:02:34.333 CC lib/ftl/ftl_sb.o 00:02:34.333 CC lib/ftl/ftl_l2p.o 00:02:34.333 CC lib/ftl/ftl_l2p_flat.o 00:02:34.333 CC lib/nvmf/nvmf_rpc.o 00:02:34.333 CC lib/nvmf/transport.o 00:02:34.333 CC lib/nvmf/tcp.o 00:02:34.333 CC lib/ftl/ftl_nv_cache.o 00:02:34.333 
CC lib/nvmf/stubs.o 00:02:34.333 CC lib/nvmf/rdma.o 00:02:34.333 CC lib/nvmf/mdns_server.o 00:02:34.333 CC lib/ftl/ftl_band.o 00:02:34.333 CC lib/nvmf/auth.o 00:02:34.333 CC lib/ftl/ftl_rq.o 00:02:34.333 CC lib/ftl/ftl_band_ops.o 00:02:34.333 CC lib/ftl/ftl_reloc.o 00:02:34.333 CC lib/ftl/ftl_writer.o 00:02:34.333 CC lib/ftl/ftl_l2p_cache.o 00:02:34.333 CC lib/ftl/mngt/ftl_mngt.o 00:02:34.333 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:34.333 CC lib/ftl/ftl_p2l.o 00:02:34.333 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:34.333 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:34.333 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:34.333 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:34.333 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:34.333 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:34.333 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:34.333 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:34.333 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:34.333 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:34.333 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:34.333 CC lib/ftl/utils/ftl_conf.o 00:02:34.333 CC lib/ftl/utils/ftl_mempool.o 00:02:34.333 CC lib/ftl/utils/ftl_bitmap.o 00:02:34.333 CC lib/ftl/utils/ftl_md.o 00:02:34.333 CC lib/ftl/utils/ftl_property.o 00:02:34.333 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:34.333 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:34.333 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:34.333 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:34.333 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:34.333 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:34.333 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:34.333 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:34.333 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:34.333 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:34.333 CC lib/ftl/base/ftl_base_dev.o 00:02:34.333 CC lib/ftl/base/ftl_base_bdev.o 00:02:34.333 CC lib/ftl/ftl_trace.o 00:02:34.333 CC lib/scsi/dev.o 00:02:34.333 CC lib/scsi/lun.o 00:02:34.333 CC lib/scsi/port.o 00:02:34.333 CC lib/scsi/scsi.o 00:02:34.333 CC lib/scsi/scsi_pr.o 00:02:34.333 CC lib/scsi/scsi_rpc.o 00:02:34.333 CC lib/scsi/scsi_bdev.o 00:02:34.333 CC lib/scsi/task.o 00:02:34.333 CC lib/ublk/ublk.o 00:02:34.333 CC lib/ublk/ublk_rpc.o 00:02:34.333 CC lib/nbd/nbd.o 00:02:34.333 CC lib/nbd/nbd_rpc.o 00:02:35.267 LIB libspdk_nbd.a 00:02:35.267 SO libspdk_nbd.so.7.0 00:02:35.267 LIB libspdk_scsi.a 00:02:35.267 SO libspdk_scsi.so.9.0 00:02:35.267 SYMLINK libspdk_nbd.so 00:02:35.267 LIB libspdk_ublk.a 00:02:35.267 SYMLINK libspdk_scsi.so 00:02:35.267 SO libspdk_ublk.so.3.0 00:02:35.525 LIB libspdk_ftl.a 00:02:35.525 SYMLINK libspdk_ublk.so 00:02:35.525 SO libspdk_ftl.so.9.0 00:02:35.525 CC lib/iscsi/conn.o 00:02:35.525 CC lib/iscsi/init_grp.o 00:02:35.525 CC lib/iscsi/md5.o 00:02:35.525 CC lib/iscsi/param.o 00:02:35.525 CC lib/iscsi/iscsi.o 00:02:35.525 CC lib/iscsi/iscsi_subsystem.o 00:02:35.525 CC lib/iscsi/portal_grp.o 00:02:35.525 CC lib/iscsi/tgt_node.o 00:02:35.525 CC lib/iscsi/task.o 00:02:35.525 CC lib/iscsi/iscsi_rpc.o 00:02:35.525 CC lib/vhost/vhost.o 00:02:35.525 CC lib/vhost/vhost_scsi.o 00:02:35.525 CC lib/vhost/vhost_rpc.o 00:02:35.525 CC lib/vhost/vhost_blk.o 00:02:35.525 CC lib/vhost/rte_vhost_user.o 00:02:35.784 SYMLINK libspdk_ftl.so 00:02:36.350 LIB libspdk_nvmf.a 00:02:36.350 SO libspdk_nvmf.so.18.0 00:02:36.609 SYMLINK libspdk_nvmf.so 00:02:36.609 LIB libspdk_vhost.a 00:02:36.609 SO libspdk_vhost.so.8.0 00:02:36.869 SYMLINK libspdk_vhost.so 00:02:36.869 LIB libspdk_iscsi.a 00:02:36.869 SO libspdk_iscsi.so.8.0 00:02:37.127 SYMLINK libspdk_iscsi.so 00:02:37.385 CC module/env_dpdk/env_dpdk_rpc.o 00:02:37.385 CC 
module/blob/bdev/blob_bdev.o 00:02:37.385 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:37.385 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:37.385 CC module/accel/iaa/accel_iaa.o 00:02:37.385 CC module/sock/posix/posix.o 00:02:37.385 CC module/keyring/file/keyring_rpc.o 00:02:37.644 CC module/accel/iaa/accel_iaa_rpc.o 00:02:37.644 CC module/keyring/file/keyring.o 00:02:37.644 CC module/scheduler/gscheduler/gscheduler.o 00:02:37.644 CC module/accel/error/accel_error_rpc.o 00:02:37.644 CC module/accel/error/accel_error.o 00:02:37.644 CC module/accel/dsa/accel_dsa_rpc.o 00:02:37.644 CC module/accel/dsa/accel_dsa.o 00:02:37.644 CC module/accel/ioat/accel_ioat_rpc.o 00:02:37.644 CC module/accel/ioat/accel_ioat.o 00:02:37.644 LIB libspdk_env_dpdk_rpc.a 00:02:37.644 SO libspdk_env_dpdk_rpc.so.6.0 00:02:37.644 SYMLINK libspdk_env_dpdk_rpc.so 00:02:37.644 LIB libspdk_scheduler_gscheduler.a 00:02:37.644 LIB libspdk_scheduler_dpdk_governor.a 00:02:37.644 LIB libspdk_keyring_file.a 00:02:37.644 LIB libspdk_accel_ioat.a 00:02:37.644 SO libspdk_scheduler_gscheduler.so.4.0 00:02:37.644 SO libspdk_keyring_file.so.1.0 00:02:37.644 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:37.644 SO libspdk_accel_ioat.so.6.0 00:02:37.644 LIB libspdk_scheduler_dynamic.a 00:02:37.644 LIB libspdk_accel_error.a 00:02:37.644 LIB libspdk_accel_iaa.a 00:02:37.644 SYMLINK libspdk_scheduler_gscheduler.so 00:02:37.644 SO libspdk_scheduler_dynamic.so.4.0 00:02:37.644 SO libspdk_accel_error.so.2.0 00:02:37.644 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:37.644 SYMLINK libspdk_keyring_file.so 00:02:37.644 SO libspdk_accel_iaa.so.3.0 00:02:37.901 LIB libspdk_accel_dsa.a 00:02:37.901 SYMLINK libspdk_accel_ioat.so 00:02:37.901 LIB libspdk_blob_bdev.a 00:02:37.901 SYMLINK libspdk_scheduler_dynamic.so 00:02:37.901 SO libspdk_accel_dsa.so.5.0 00:02:37.901 SYMLINK libspdk_accel_error.so 00:02:37.901 SO libspdk_blob_bdev.so.11.0 00:02:37.901 SYMLINK libspdk_accel_iaa.so 00:02:37.901 SYMLINK libspdk_accel_dsa.so 00:02:37.901 SYMLINK libspdk_blob_bdev.so 00:02:38.160 LIB libspdk_sock_posix.a 00:02:38.160 SO libspdk_sock_posix.so.6.0 00:02:38.160 CC module/bdev/error/vbdev_error.o 00:02:38.160 CC module/bdev/malloc/bdev_malloc.o 00:02:38.160 CC module/bdev/error/vbdev_error_rpc.o 00:02:38.160 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:38.160 CC module/bdev/lvol/vbdev_lvol.o 00:02:38.160 CC module/bdev/split/vbdev_split.o 00:02:38.160 CC module/bdev/split/vbdev_split_rpc.o 00:02:38.160 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:38.160 CC module/blobfs/bdev/blobfs_bdev.o 00:02:38.160 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:38.160 CC module/bdev/gpt/gpt.o 00:02:38.160 CC module/bdev/gpt/vbdev_gpt.o 00:02:38.160 CC module/bdev/nvme/bdev_nvme.o 00:02:38.160 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:38.160 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:38.160 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:38.160 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:38.160 CC module/bdev/aio/bdev_aio.o 00:02:38.160 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:38.160 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:38.160 CC module/bdev/iscsi/bdev_iscsi.o 00:02:38.160 CC module/bdev/aio/bdev_aio_rpc.o 00:02:38.160 CC module/bdev/nvme/bdev_mdns_client.o 00:02:38.160 CC module/bdev/nvme/nvme_rpc.o 00:02:38.160 CC module/bdev/null/bdev_null.o 00:02:38.160 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:38.160 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:38.160 CC module/bdev/nvme/vbdev_opal.o 00:02:38.160 CC 
module/bdev/nvme/vbdev_opal_rpc.o 00:02:38.160 CC module/bdev/null/bdev_null_rpc.o 00:02:38.160 CC module/bdev/raid/bdev_raid_rpc.o 00:02:38.160 CC module/bdev/raid/bdev_raid.o 00:02:38.160 CC module/bdev/raid/bdev_raid_sb.o 00:02:38.160 CC module/bdev/raid/raid0.o 00:02:38.160 CC module/bdev/raid/raid1.o 00:02:38.160 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:38.160 CC module/bdev/delay/vbdev_delay.o 00:02:38.160 CC module/bdev/raid/concat.o 00:02:38.160 CC module/bdev/ftl/bdev_ftl.o 00:02:38.160 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:38.160 CC module/bdev/passthru/vbdev_passthru.o 00:02:38.160 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:38.160 SYMLINK libspdk_sock_posix.so 00:02:38.418 LIB libspdk_blobfs_bdev.a 00:02:38.418 LIB libspdk_bdev_split.a 00:02:38.418 SO libspdk_blobfs_bdev.so.6.0 00:02:38.418 LIB libspdk_bdev_error.a 00:02:38.418 SO libspdk_bdev_split.so.6.0 00:02:38.676 SO libspdk_bdev_error.so.6.0 00:02:38.676 LIB libspdk_bdev_gpt.a 00:02:38.676 LIB libspdk_bdev_null.a 00:02:38.676 SYMLINK libspdk_blobfs_bdev.so 00:02:38.676 LIB libspdk_bdev_passthru.a 00:02:38.676 LIB libspdk_bdev_aio.a 00:02:38.676 SYMLINK libspdk_bdev_split.so 00:02:38.676 SO libspdk_bdev_gpt.so.6.0 00:02:38.676 LIB libspdk_bdev_iscsi.a 00:02:38.676 SO libspdk_bdev_null.so.6.0 00:02:38.676 SO libspdk_bdev_passthru.so.6.0 00:02:38.676 SYMLINK libspdk_bdev_error.so 00:02:38.676 SO libspdk_bdev_aio.so.6.0 00:02:38.676 SO libspdk_bdev_iscsi.so.6.0 00:02:38.676 LIB libspdk_bdev_ftl.a 00:02:38.676 SYMLINK libspdk_bdev_gpt.so 00:02:38.676 LIB libspdk_bdev_malloc.a 00:02:38.676 SYMLINK libspdk_bdev_null.so 00:02:38.676 SO libspdk_bdev_ftl.so.6.0 00:02:38.676 SYMLINK libspdk_bdev_passthru.so 00:02:38.676 LIB libspdk_bdev_delay.a 00:02:38.676 SYMLINK libspdk_bdev_aio.so 00:02:38.676 LIB libspdk_bdev_zone_block.a 00:02:38.676 SYMLINK libspdk_bdev_iscsi.so 00:02:38.676 SO libspdk_bdev_malloc.so.6.0 00:02:38.676 SO libspdk_bdev_delay.so.6.0 00:02:38.676 SO libspdk_bdev_zone_block.so.6.0 00:02:38.676 SYMLINK libspdk_bdev_ftl.so 00:02:38.676 SYMLINK libspdk_bdev_malloc.so 00:02:38.676 SYMLINK libspdk_bdev_delay.so 00:02:38.676 SYMLINK libspdk_bdev_zone_block.so 00:02:38.933 LIB libspdk_bdev_lvol.a 00:02:38.933 SO libspdk_bdev_lvol.so.6.0 00:02:38.933 LIB libspdk_bdev_virtio.a 00:02:38.933 SO libspdk_bdev_virtio.so.6.0 00:02:38.933 SYMLINK libspdk_bdev_lvol.so 00:02:38.933 SYMLINK libspdk_bdev_virtio.so 00:02:39.499 LIB libspdk_bdev_raid.a 00:02:39.499 SO libspdk_bdev_raid.so.6.0 00:02:39.499 SYMLINK libspdk_bdev_raid.so 00:02:40.065 LIB libspdk_bdev_nvme.a 00:02:40.065 SO libspdk_bdev_nvme.so.7.0 00:02:40.065 SYMLINK libspdk_bdev_nvme.so 00:02:40.632 CC module/event/subsystems/vmd/vmd.o 00:02:40.632 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:40.632 CC module/event/subsystems/scheduler/scheduler.o 00:02:40.632 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:40.632 CC module/event/subsystems/sock/sock.o 00:02:40.632 CC module/event/subsystems/keyring/keyring.o 00:02:40.632 CC module/event/subsystems/iobuf/iobuf.o 00:02:40.632 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:40.632 LIB libspdk_event_vhost_blk.a 00:02:40.632 LIB libspdk_event_keyring.a 00:02:40.632 LIB libspdk_event_scheduler.a 00:02:40.632 SO libspdk_event_vhost_blk.so.3.0 00:02:40.632 LIB libspdk_event_vmd.a 00:02:40.632 LIB libspdk_event_sock.a 00:02:40.632 SO libspdk_event_keyring.so.1.0 00:02:40.632 SO libspdk_event_scheduler.so.4.0 00:02:40.632 SO libspdk_event_vmd.so.6.0 00:02:40.632 SO libspdk_event_sock.so.5.0 00:02:40.632 
LIB libspdk_event_iobuf.a 00:02:40.632 SYMLINK libspdk_event_vhost_blk.so 00:02:40.632 SO libspdk_event_iobuf.so.3.0 00:02:40.632 SYMLINK libspdk_event_keyring.so 00:02:40.990 SYMLINK libspdk_event_vmd.so 00:02:40.990 SYMLINK libspdk_event_scheduler.so 00:02:40.990 SYMLINK libspdk_event_sock.so 00:02:40.990 SYMLINK libspdk_event_iobuf.so 00:02:40.990 CC module/event/subsystems/accel/accel.o 00:02:41.250 LIB libspdk_event_accel.a 00:02:41.250 SO libspdk_event_accel.so.6.0 00:02:41.250 SYMLINK libspdk_event_accel.so 00:02:41.509 CC module/event/subsystems/bdev/bdev.o 00:02:41.509 LIB libspdk_event_bdev.a 00:02:41.767 SO libspdk_event_bdev.so.6.0 00:02:41.767 SYMLINK libspdk_event_bdev.so 00:02:42.025 CC module/event/subsystems/scsi/scsi.o 00:02:42.025 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:42.025 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:42.025 CC module/event/subsystems/ublk/ublk.o 00:02:42.025 CC module/event/subsystems/nbd/nbd.o 00:02:42.025 LIB libspdk_event_nbd.a 00:02:42.025 LIB libspdk_event_scsi.a 00:02:42.025 LIB libspdk_event_ublk.a 00:02:42.025 SO libspdk_event_scsi.so.6.0 00:02:42.025 SO libspdk_event_nbd.so.6.0 00:02:42.025 SO libspdk_event_ublk.so.3.0 00:02:42.025 SYMLINK libspdk_event_scsi.so 00:02:42.025 SYMLINK libspdk_event_nbd.so 00:02:42.284 SYMLINK libspdk_event_ublk.so 00:02:42.284 LIB libspdk_event_nvmf.a 00:02:42.284 SO libspdk_event_nvmf.so.6.0 00:02:42.284 SYMLINK libspdk_event_nvmf.so 00:02:42.284 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:42.284 CC module/event/subsystems/iscsi/iscsi.o 00:02:42.542 LIB libspdk_event_vhost_scsi.a 00:02:42.542 SO libspdk_event_vhost_scsi.so.3.0 00:02:42.542 LIB libspdk_event_iscsi.a 00:02:42.542 SYMLINK libspdk_event_vhost_scsi.so 00:02:42.543 SO libspdk_event_iscsi.so.6.0 00:02:42.543 SYMLINK libspdk_event_iscsi.so 00:02:42.801 SO libspdk.so.6.0 00:02:42.801 SYMLINK libspdk.so 00:02:43.068 TEST_HEADER include/spdk/accel_module.h 00:02:43.068 TEST_HEADER include/spdk/accel.h 00:02:43.068 TEST_HEADER include/spdk/barrier.h 00:02:43.068 TEST_HEADER include/spdk/assert.h 00:02:43.068 TEST_HEADER include/spdk/bdev.h 00:02:43.068 TEST_HEADER include/spdk/base64.h 00:02:43.068 CC test/rpc_client/rpc_client_test.o 00:02:43.068 TEST_HEADER include/spdk/bdev_module.h 00:02:43.068 TEST_HEADER include/spdk/bit_array.h 00:02:43.068 TEST_HEADER include/spdk/bdev_zone.h 00:02:43.068 TEST_HEADER include/spdk/bit_pool.h 00:02:43.068 TEST_HEADER include/spdk/blob_bdev.h 00:02:43.068 TEST_HEADER include/spdk/blob.h 00:02:43.068 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:43.068 TEST_HEADER include/spdk/blobfs.h 00:02:43.068 CC app/spdk_lspci/spdk_lspci.o 00:02:43.068 TEST_HEADER include/spdk/conf.h 00:02:43.068 TEST_HEADER include/spdk/config.h 00:02:43.068 TEST_HEADER include/spdk/cpuset.h 00:02:43.068 TEST_HEADER include/spdk/crc16.h 00:02:43.068 TEST_HEADER include/spdk/crc32.h 00:02:43.068 TEST_HEADER include/spdk/dif.h 00:02:43.068 TEST_HEADER include/spdk/dma.h 00:02:43.068 TEST_HEADER include/spdk/crc64.h 00:02:43.068 CXX app/trace/trace.o 00:02:43.068 TEST_HEADER include/spdk/endian.h 00:02:43.068 TEST_HEADER include/spdk/env_dpdk.h 00:02:43.069 TEST_HEADER include/spdk/env.h 00:02:43.069 TEST_HEADER include/spdk/event.h 00:02:43.069 TEST_HEADER include/spdk/fd_group.h 00:02:43.069 TEST_HEADER include/spdk/fd.h 00:02:43.069 TEST_HEADER include/spdk/file.h 00:02:43.069 CC app/spdk_top/spdk_top.o 00:02:43.069 TEST_HEADER include/spdk/ftl.h 00:02:43.069 CC app/spdk_nvme_discover/discovery_aer.o 00:02:43.069 
TEST_HEADER include/spdk/gpt_spec.h 00:02:43.069 CC app/spdk_nvme_identify/identify.o 00:02:43.069 CC app/trace_record/trace_record.o 00:02:43.069 TEST_HEADER include/spdk/histogram_data.h 00:02:43.069 TEST_HEADER include/spdk/hexlify.h 00:02:43.069 TEST_HEADER include/spdk/idxd.h 00:02:43.069 TEST_HEADER include/spdk/idxd_spec.h 00:02:43.069 TEST_HEADER include/spdk/init.h 00:02:43.069 TEST_HEADER include/spdk/ioat.h 00:02:43.069 CC app/spdk_nvme_perf/perf.o 00:02:43.069 TEST_HEADER include/spdk/ioat_spec.h 00:02:43.069 TEST_HEADER include/spdk/iscsi_spec.h 00:02:43.069 TEST_HEADER include/spdk/jsonrpc.h 00:02:43.069 TEST_HEADER include/spdk/json.h 00:02:43.069 TEST_HEADER include/spdk/keyring_module.h 00:02:43.069 TEST_HEADER include/spdk/keyring.h 00:02:43.069 TEST_HEADER include/spdk/likely.h 00:02:43.069 TEST_HEADER include/spdk/lvol.h 00:02:43.069 TEST_HEADER include/spdk/log.h 00:02:43.069 TEST_HEADER include/spdk/mmio.h 00:02:43.069 TEST_HEADER include/spdk/memory.h 00:02:43.069 TEST_HEADER include/spdk/nbd.h 00:02:43.069 TEST_HEADER include/spdk/nvme.h 00:02:43.069 TEST_HEADER include/spdk/nvme_intel.h 00:02:43.069 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:43.069 TEST_HEADER include/spdk/notify.h 00:02:43.069 TEST_HEADER include/spdk/nvme_spec.h 00:02:43.069 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:43.069 TEST_HEADER include/spdk/nvme_zns.h 00:02:43.069 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:43.069 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:43.069 TEST_HEADER include/spdk/nvmf.h 00:02:43.069 CC app/iscsi_tgt/iscsi_tgt.o 00:02:43.069 TEST_HEADER include/spdk/nvmf_spec.h 00:02:43.069 TEST_HEADER include/spdk/opal.h 00:02:43.069 TEST_HEADER include/spdk/nvmf_transport.h 00:02:43.069 TEST_HEADER include/spdk/opal_spec.h 00:02:43.069 TEST_HEADER include/spdk/pci_ids.h 00:02:43.069 TEST_HEADER include/spdk/pipe.h 00:02:43.069 TEST_HEADER include/spdk/queue.h 00:02:43.069 TEST_HEADER include/spdk/reduce.h 00:02:43.069 TEST_HEADER include/spdk/scheduler.h 00:02:43.069 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:43.069 TEST_HEADER include/spdk/rpc.h 00:02:43.069 TEST_HEADER include/spdk/scsi_spec.h 00:02:43.069 TEST_HEADER include/spdk/scsi.h 00:02:43.069 TEST_HEADER include/spdk/sock.h 00:02:43.069 TEST_HEADER include/spdk/stdinc.h 00:02:43.069 TEST_HEADER include/spdk/thread.h 00:02:43.069 TEST_HEADER include/spdk/string.h 00:02:43.069 CC app/spdk_dd/spdk_dd.o 00:02:43.069 TEST_HEADER include/spdk/trace_parser.h 00:02:43.069 TEST_HEADER include/spdk/tree.h 00:02:43.069 CC app/vhost/vhost.o 00:02:43.069 TEST_HEADER include/spdk/trace.h 00:02:43.069 TEST_HEADER include/spdk/ublk.h 00:02:43.069 TEST_HEADER include/spdk/uuid.h 00:02:43.069 CC app/spdk_tgt/spdk_tgt.o 00:02:43.069 TEST_HEADER include/spdk/util.h 00:02:43.069 TEST_HEADER include/spdk/version.h 00:02:43.069 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:43.069 TEST_HEADER include/spdk/vhost.h 00:02:43.069 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:43.069 CC app/nvmf_tgt/nvmf_main.o 00:02:43.069 TEST_HEADER include/spdk/vmd.h 00:02:43.069 TEST_HEADER include/spdk/zipf.h 00:02:43.069 TEST_HEADER include/spdk/xor.h 00:02:43.069 CXX test/cpp_headers/accel.o 00:02:43.069 CXX test/cpp_headers/accel_module.o 00:02:43.069 CXX test/cpp_headers/assert.o 00:02:43.069 CXX test/cpp_headers/barrier.o 00:02:43.069 CXX test/cpp_headers/base64.o 00:02:43.069 CXX test/cpp_headers/bdev.o 00:02:43.069 CXX test/cpp_headers/bdev_module.o 00:02:43.069 CXX test/cpp_headers/bdev_zone.o 00:02:43.069 CXX 
test/cpp_headers/bit_array.o 00:02:43.069 CXX test/cpp_headers/bit_pool.o 00:02:43.069 CXX test/cpp_headers/blob_bdev.o 00:02:43.069 CXX test/cpp_headers/blobfs_bdev.o 00:02:43.069 CXX test/cpp_headers/blobfs.o 00:02:43.069 CXX test/cpp_headers/config.o 00:02:43.069 CXX test/cpp_headers/blob.o 00:02:43.069 CXX test/cpp_headers/crc16.o 00:02:43.069 CXX test/cpp_headers/conf.o 00:02:43.069 CXX test/cpp_headers/cpuset.o 00:02:43.069 CXX test/cpp_headers/crc32.o 00:02:43.069 CXX test/cpp_headers/dif.o 00:02:43.069 CXX test/cpp_headers/crc64.o 00:02:43.069 CXX test/cpp_headers/endian.o 00:02:43.069 CXX test/cpp_headers/dma.o 00:02:43.069 CXX test/cpp_headers/env.o 00:02:43.069 CXX test/cpp_headers/event.o 00:02:43.069 CXX test/cpp_headers/env_dpdk.o 00:02:43.069 CXX test/cpp_headers/fd_group.o 00:02:43.069 CXX test/cpp_headers/file.o 00:02:43.069 CXX test/cpp_headers/ftl.o 00:02:43.069 CXX test/cpp_headers/fd.o 00:02:43.069 CXX test/cpp_headers/hexlify.o 00:02:43.069 CXX test/cpp_headers/gpt_spec.o 00:02:43.069 CXX test/cpp_headers/histogram_data.o 00:02:43.069 CXX test/cpp_headers/init.o 00:02:43.069 CXX test/cpp_headers/idxd.o 00:02:43.069 CXX test/cpp_headers/idxd_spec.o 00:02:43.069 CXX test/cpp_headers/ioat_spec.o 00:02:43.069 CXX test/cpp_headers/ioat.o 00:02:43.069 CXX test/cpp_headers/iscsi_spec.o 00:02:43.069 CXX test/cpp_headers/jsonrpc.o 00:02:43.069 CXX test/cpp_headers/json.o 00:02:43.069 CXX test/cpp_headers/keyring.o 00:02:43.069 CXX test/cpp_headers/log.o 00:02:43.069 CXX test/cpp_headers/likely.o 00:02:43.069 CXX test/cpp_headers/keyring_module.o 00:02:43.069 CXX test/cpp_headers/memory.o 00:02:43.069 CXX test/cpp_headers/lvol.o 00:02:43.069 CXX test/cpp_headers/mmio.o 00:02:43.069 CXX test/cpp_headers/nbd.o 00:02:43.069 CXX test/cpp_headers/nvme.o 00:02:43.069 CXX test/cpp_headers/nvme_intel.o 00:02:43.069 CXX test/cpp_headers/nvme_ocssd.o 00:02:43.069 CXX test/cpp_headers/notify.o 00:02:43.334 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:43.334 CXX test/cpp_headers/nvme_spec.o 00:02:43.334 CC test/app/histogram_perf/histogram_perf.o 00:02:43.334 CC examples/accel/perf/accel_perf.o 00:02:43.334 CC test/nvme/e2edp/nvme_dp.o 00:02:43.334 CC app/fio/nvme/fio_plugin.o 00:02:43.334 CC test/app/jsoncat/jsoncat.o 00:02:43.334 CC examples/nvme/hello_world/hello_world.o 00:02:43.334 CC test/env/vtophys/vtophys.o 00:02:43.334 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:43.334 CC examples/nvme/reconnect/reconnect.o 00:02:43.334 CC test/event/reactor/reactor.o 00:02:43.334 CC test/env/memory/memory_ut.o 00:02:43.334 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:43.334 CC test/thread/poller_perf/poller_perf.o 00:02:43.334 CC test/nvme/boot_partition/boot_partition.o 00:02:43.334 CC test/env/pci/pci_ut.o 00:02:43.334 CC test/nvme/reset/reset.o 00:02:43.334 CC examples/nvme/abort/abort.o 00:02:43.334 CC test/nvme/fused_ordering/fused_ordering.o 00:02:43.334 CC examples/util/zipf/zipf.o 00:02:43.334 CC test/event/app_repeat/app_repeat.o 00:02:43.334 CC examples/nvme/arbitration/arbitration.o 00:02:43.334 CC test/app/stub/stub.o 00:02:43.334 CC examples/ioat/perf/perf.o 00:02:43.334 CC test/nvme/fdp/fdp.o 00:02:43.334 CC test/event/reactor_perf/reactor_perf.o 00:02:43.334 CC examples/sock/hello_world/hello_sock.o 00:02:43.334 CC test/nvme/connect_stress/connect_stress.o 00:02:43.334 CC test/nvme/aer/aer.o 00:02:43.334 CC examples/nvme/hotplug/hotplug.o 00:02:43.334 CC app/fio/bdev/fio_plugin.o 00:02:43.334 CC test/nvme/overhead/overhead.o 00:02:43.334 CC test/nvme/sgl/sgl.o 
00:02:43.334 CC test/nvme/err_injection/err_injection.o 00:02:43.334 CC test/event/scheduler/scheduler.o 00:02:43.598 CC test/accel/dif/dif.o 00:02:43.598 CC test/nvme/cuse/cuse.o 00:02:43.598 CC examples/ioat/verify/verify.o 00:02:43.598 CC test/nvme/compliance/nvme_compliance.o 00:02:43.598 CC examples/idxd/perf/perf.o 00:02:43.598 CC test/nvme/simple_copy/simple_copy.o 00:02:43.598 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:43.598 CC examples/vmd/led/led.o 00:02:43.598 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:43.598 CC examples/thread/thread/thread_ex.o 00:02:43.598 CC test/event/event_perf/event_perf.o 00:02:43.598 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:43.598 CC test/app/bdev_svc/bdev_svc.o 00:02:43.598 CC test/nvme/reserve/reserve.o 00:02:43.598 CC test/dma/test_dma/test_dma.o 00:02:43.598 CC test/bdev/bdevio/bdevio.o 00:02:43.598 CC test/nvme/startup/startup.o 00:02:43.598 CC test/blobfs/mkfs/mkfs.o 00:02:43.598 CC examples/bdev/bdevperf/bdevperf.o 00:02:43.598 CC examples/vmd/lsvmd/lsvmd.o 00:02:43.598 CC examples/nvmf/nvmf/nvmf.o 00:02:43.598 CC examples/bdev/hello_world/hello_bdev.o 00:02:43.598 CC examples/blob/hello_world/hello_blob.o 00:02:43.598 CC examples/blob/cli/blobcli.o 00:02:43.598 LINK spdk_lspci 00:02:43.857 LINK rpc_client_test 00:02:43.857 LINK nvmf_tgt 00:02:43.857 LINK iscsi_tgt 00:02:43.857 LINK vhost 00:02:44.120 LINK vtophys 00:02:44.120 LINK histogram_perf 00:02:44.120 LINK spdk_tgt 00:02:44.120 LINK spdk_nvme_discover 00:02:44.120 LINK poller_perf 00:02:44.120 LINK interrupt_tgt 00:02:44.120 LINK pmr_persistence 00:02:44.120 LINK reactor 00:02:44.120 LINK event_perf 00:02:44.120 LINK env_dpdk_post_init 00:02:44.120 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:44.120 CC test/env/mem_callbacks/mem_callbacks.o 00:02:44.120 CC test/lvol/esnap/esnap.o 00:02:44.120 LINK jsoncat 00:02:44.120 CXX test/cpp_headers/nvme_zns.o 00:02:44.120 CXX test/cpp_headers/nvmf_cmd.o 00:02:44.120 LINK cmb_copy 00:02:44.120 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:44.120 CXX test/cpp_headers/nvmf.o 00:02:44.120 CXX test/cpp_headers/nvmf_spec.o 00:02:44.120 CXX test/cpp_headers/nvmf_transport.o 00:02:44.120 CXX test/cpp_headers/opal.o 00:02:44.120 CXX test/cpp_headers/opal_spec.o 00:02:44.120 LINK reactor_perf 00:02:44.120 CXX test/cpp_headers/pci_ids.o 00:02:44.120 LINK verify 00:02:44.120 CXX test/cpp_headers/pipe.o 00:02:44.120 CXX test/cpp_headers/queue.o 00:02:44.120 LINK led 00:02:44.120 CXX test/cpp_headers/reduce.o 00:02:44.120 CXX test/cpp_headers/rpc.o 00:02:44.120 CXX test/cpp_headers/scheduler.o 00:02:44.120 LINK reserve 00:02:44.120 CXX test/cpp_headers/scsi_spec.o 00:02:44.120 CXX test/cpp_headers/scsi.o 00:02:44.120 LINK stub 00:02:44.120 CXX test/cpp_headers/sock.o 00:02:44.383 LINK mkfs 00:02:44.383 CXX test/cpp_headers/stdinc.o 00:02:44.383 CXX test/cpp_headers/string.o 00:02:44.383 CXX test/cpp_headers/thread.o 00:02:44.383 LINK hello_world 00:02:44.383 CXX test/cpp_headers/trace.o 00:02:44.383 LINK zipf 00:02:44.383 CXX test/cpp_headers/tree.o 00:02:44.383 CXX test/cpp_headers/trace_parser.o 00:02:44.383 CXX test/cpp_headers/ublk.o 00:02:44.383 LINK lsvmd 00:02:44.383 CXX test/cpp_headers/util.o 00:02:44.383 CXX test/cpp_headers/uuid.o 00:02:44.383 LINK app_repeat 00:02:44.383 CXX test/cpp_headers/version.o 00:02:44.383 LINK hotplug 00:02:44.383 LINK spdk_trace_record 00:02:44.383 CXX test/cpp_headers/vfio_user_pci.o 00:02:44.383 CXX test/cpp_headers/vfio_user_spec.o 00:02:44.383 LINK scheduler 00:02:44.383 LINK nvme_dp 00:02:44.383 CXX 
test/cpp_headers/vmd.o 00:02:44.383 CXX test/cpp_headers/xor.o 00:02:44.383 CXX test/cpp_headers/vhost.o 00:02:44.383 CXX test/cpp_headers/zipf.o 00:02:44.383 LINK boot_partition 00:02:44.383 LINK doorbell_aers 00:02:44.383 LINK simple_copy 00:02:44.383 LINK connect_stress 00:02:44.383 LINK bdev_svc 00:02:44.383 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:44.383 LINK err_injection 00:02:44.383 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:44.383 LINK nvme_compliance 00:02:44.383 LINK startup 00:02:44.383 LINK spdk_dd 00:02:44.383 LINK nvmf 00:02:44.383 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:44.640 LINK reconnect 00:02:44.640 LINK fused_ordering 00:02:44.640 LINK sgl 00:02:44.641 LINK ioat_perf 00:02:44.641 LINK hello_bdev 00:02:44.641 LINK idxd_perf 00:02:44.641 LINK hello_blob 00:02:44.641 LINK thread 00:02:44.641 LINK hello_sock 00:02:44.641 LINK bdevio 00:02:44.641 LINK reset 00:02:44.641 LINK overhead 00:02:44.641 LINK aer 00:02:44.641 LINK spdk_nvme 00:02:44.641 LINK accel_perf 00:02:44.641 LINK fdp 00:02:44.641 LINK arbitration 00:02:44.641 LINK spdk_bdev 00:02:44.641 LINK dif 00:02:44.898 LINK pci_ut 00:02:44.898 LINK abort 00:02:44.898 LINK spdk_trace 00:02:44.898 LINK nvme_manage 00:02:44.898 LINK test_dma 00:02:44.898 LINK blobcli 00:02:44.898 LINK nvme_fuzz 00:02:44.899 LINK spdk_nvme_perf 00:02:44.899 LINK mem_callbacks 00:02:44.899 LINK memory_ut 00:02:45.157 LINK vhost_fuzz 00:02:45.157 LINK bdevperf 00:02:45.157 LINK spdk_nvme_identify 00:02:45.157 LINK cuse 00:02:45.157 LINK spdk_top 00:02:46.090 LINK iscsi_fuzz 00:02:47.989 LINK esnap 00:02:48.247 00:02:48.247 real 0m38.724s 00:02:48.247 user 6m4.502s 00:02:48.247 sys 5m29.597s 00:02:48.247 00:18:14 make -- common/autotest_common.sh@1123 -- $ xtrace_disable 00:02:48.247 00:18:14 make -- common/autotest_common.sh@10 -- $ set +x 00:02:48.247 ************************************ 00:02:48.247 END TEST make 00:02:48.247 ************************************ 00:02:48.247 00:18:14 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:48.247 00:18:14 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:48.247 00:18:14 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:48.247 00:18:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:48.247 00:18:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:48.247 00:18:14 -- pm/common@44 -- $ pid=1669946 00:02:48.247 00:18:14 -- pm/common@50 -- $ kill -TERM 1669946 00:02:48.247 00:18:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:48.248 00:18:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:48.248 00:18:14 -- pm/common@44 -- $ pid=1669947 00:02:48.248 00:18:14 -- pm/common@50 -- $ kill -TERM 1669947 00:02:48.248 00:18:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:48.248 00:18:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:48.248 00:18:14 -- pm/common@44 -- $ pid=1669949 00:02:48.248 00:18:14 -- pm/common@50 -- $ kill -TERM 1669949 00:02:48.248 00:18:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:48.248 00:18:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:48.248 00:18:14 -- pm/common@44 -- $ pid=1669974 00:02:48.248 00:18:14 -- pm/common@50 -- $ sudo -E kill -TERM 1669974 00:02:48.248 
00:18:14 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:02:48.248 00:18:14 -- nvmf/common.sh@7 -- # uname -s 00:02:48.248 00:18:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:48.248 00:18:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:48.248 00:18:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:48.248 00:18:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:48.248 00:18:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:48.248 00:18:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:48.248 00:18:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:48.248 00:18:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:48.248 00:18:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:48.248 00:18:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:48.248 00:18:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:02:48.248 00:18:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:02:48.248 00:18:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:48.248 00:18:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:48.248 00:18:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:48.248 00:18:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:48.248 00:18:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:02:48.248 00:18:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:48.248 00:18:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:48.248 00:18:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:48.248 00:18:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:48.248 00:18:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:48.248 00:18:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:48.248 00:18:14 -- paths/export.sh@5 -- # export PATH 00:02:48.248 00:18:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:48.248 00:18:14 -- nvmf/common.sh@47 -- # : 0 00:02:48.248 00:18:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:48.248 00:18:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:48.248 00:18:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:48.248 00:18:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:48.248 00:18:14 -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:02:48.248 00:18:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:48.248 00:18:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:48.248 00:18:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:48.248 00:18:14 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:48.248 00:18:14 -- spdk/autotest.sh@32 -- # uname -s 00:02:48.248 00:18:14 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:48.248 00:18:14 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:48.248 00:18:14 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/coredumps 00:02:48.248 00:18:14 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:48.248 00:18:14 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/coredumps 00:02:48.248 00:18:14 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:48.248 00:18:14 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:48.248 00:18:14 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:48.248 00:18:14 -- spdk/autotest.sh@48 -- # udevadm_pid=1729490 00:02:48.248 00:18:14 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:48.248 00:18:14 -- pm/common@17 -- # local monitor 00:02:48.248 00:18:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:48.248 00:18:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:48.248 00:18:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:48.248 00:18:14 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:48.248 00:18:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:48.248 00:18:14 -- pm/common@25 -- # sleep 1 00:02:48.248 00:18:14 -- pm/common@21 -- # date +%s 00:02:48.248 00:18:14 -- pm/common@21 -- # date +%s 00:02:48.248 00:18:14 -- pm/common@21 -- # date +%s 00:02:48.248 00:18:14 -- pm/common@21 -- # date +%s 00:02:48.248 00:18:14 -- pm/common@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715725094 00:02:48.248 00:18:14 -- pm/common@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715725094 00:02:48.248 00:18:14 -- pm/common@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715725094 00:02:48.248 00:18:14 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1715725094 00:02:48.248 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715725094_collect-cpu-load.pm.log 00:02:48.248 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715725094_collect-vmstat.pm.log 00:02:48.248 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715725094_collect-cpu-temp.pm.log 00:02:48.248 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autotest.sh.1715725094_collect-bmc-pm.bmc.pm.log 00:02:49.183 00:18:15 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || 
:; exit 1' SIGINT SIGTERM EXIT 00:02:49.183 00:18:15 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:49.183 00:18:15 -- common/autotest_common.sh@721 -- # xtrace_disable 00:02:49.183 00:18:15 -- common/autotest_common.sh@10 -- # set +x 00:02:49.183 00:18:15 -- spdk/autotest.sh@59 -- # create_test_list 00:02:49.183 00:18:15 -- common/autotest_common.sh@745 -- # xtrace_disable 00:02:49.183 00:18:15 -- common/autotest_common.sh@10 -- # set +x 00:02:49.183 00:18:15 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/autotest.sh 00:02:49.183 00:18:15 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk 00:02:49.183 00:18:15 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:02:49.183 00:18:15 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output 00:02:49.183 00:18:15 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/dsa-phy-autotest/spdk 00:02:49.183 00:18:15 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:49.183 00:18:15 -- common/autotest_common.sh@1452 -- # uname 00:02:49.183 00:18:15 -- common/autotest_common.sh@1452 -- # '[' Linux = FreeBSD ']' 00:02:49.183 00:18:15 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:49.183 00:18:15 -- common/autotest_common.sh@1472 -- # uname 00:02:49.183 00:18:15 -- common/autotest_common.sh@1472 -- # [[ Linux = FreeBSD ]] 00:02:49.183 00:18:15 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:49.183 00:18:15 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:49.183 00:18:15 -- spdk/autotest.sh@72 -- # hash lcov 00:02:49.183 00:18:15 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:49.183 00:18:15 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:49.183 --rc lcov_branch_coverage=1 00:02:49.183 --rc lcov_function_coverage=1 00:02:49.183 --rc genhtml_branch_coverage=1 00:02:49.183 --rc genhtml_function_coverage=1 00:02:49.183 --rc genhtml_legend=1 00:02:49.183 --rc geninfo_all_blocks=1 00:02:49.183 ' 00:02:49.183 00:18:15 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:49.183 --rc lcov_branch_coverage=1 00:02:49.183 --rc lcov_function_coverage=1 00:02:49.183 --rc genhtml_branch_coverage=1 00:02:49.183 --rc genhtml_function_coverage=1 00:02:49.183 --rc genhtml_legend=1 00:02:49.183 --rc geninfo_all_blocks=1 00:02:49.183 ' 00:02:49.183 00:18:15 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:49.183 --rc lcov_branch_coverage=1 00:02:49.183 --rc lcov_function_coverage=1 00:02:49.183 --rc genhtml_branch_coverage=1 00:02:49.183 --rc genhtml_function_coverage=1 00:02:49.183 --rc genhtml_legend=1 00:02:49.183 --rc geninfo_all_blocks=1 00:02:49.183 --no-external' 00:02:49.183 00:18:15 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:49.183 --rc lcov_branch_coverage=1 00:02:49.183 --rc lcov_function_coverage=1 00:02:49.183 --rc genhtml_branch_coverage=1 00:02:49.183 --rc genhtml_function_coverage=1 00:02:49.183 --rc genhtml_legend=1 00:02:49.183 --rc geninfo_all_blocks=1 00:02:49.183 --no-external' 00:02:49.183 00:18:15 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:49.441 lcov: LCOV version 1.14 00:02:49.441 00:18:15 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc 
geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/dsa-phy-autotest/spdk -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_base.info 00:02:55.993 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:55.993 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:55.993 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:55.993 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:55.993 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:55.993 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:55.993 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:55.993 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 
00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:04.101 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:04.101 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:04.101 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:04.102 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:04.102 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:04.102 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:04.103 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:04.103 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:04.103 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:04.103 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:04.103 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:04.103 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:04.103 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:04.103 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:04.103 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:04.103 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:04.103 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:04.103 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:04.103 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:04.103 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:04.103 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:04.103 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:04.103 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:05.035 00:18:30 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:05.035 00:18:30 -- common/autotest_common.sh@721 -- # xtrace_disable 00:03:05.035 00:18:30 -- common/autotest_common.sh@10 -- # set +x 00:03:05.035 00:18:30 -- spdk/autotest.sh@91 -- # rm -f 00:03:05.035 00:18:31 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:03:08.332 0000:c9:00.0 (8086 0a54): Already using the nvme driver 00:03:08.332 0000:74:02.0 (8086 0cfe): Already using the idxd driver 00:03:08.332 0000:f1:02.0 (8086 0cfe): Already using the idxd driver 00:03:08.332 0000:79:02.0 (8086 0cfe): Already using the idxd driver 00:03:08.332 0000:6f:01.0 (8086 0b25): Already using the idxd driver 00:03:08.332 0000:6f:02.0 (8086 0cfe): Already using the idxd driver 00:03:08.332 0000:f6:01.0 (8086 0b25): Already using the idxd driver 00:03:08.332 0000:f6:02.0 (8086 0cfe): Already using the idxd driver 00:03:08.332 0000:74:01.0 (8086 0b25): Already using the idxd driver 00:03:08.332 0000:6a:02.0 (8086 0cfe): Already using the idxd driver 00:03:08.332 0000:79:01.0 (8086 0b25): Already using the idxd driver 00:03:08.332 0000:ec:01.0 (8086 0b25): Already using the idxd driver 00:03:08.332 0000:6a:01.0 (8086 0b25): Already using the idxd driver 00:03:08.332 0000:ca:00.0 (8086 0a54): Already using the nvme driver 00:03:08.332 0000:ec:02.0 (8086 0cfe): Already using the idxd driver 00:03:08.332 0000:e7:01.0 (8086 0b25): Already using the idxd driver 00:03:08.332 0000:e7:02.0 (8086 0cfe): Already using the idxd driver 00:03:08.332 0000:f1:01.0 (8086 0b25): Already using the idxd driver 00:03:08.592 00:18:34 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:08.592 00:18:34 -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:03:08.592 00:18:34 -- common/autotest_common.sh@1666 -- # local 
-gA zoned_devs 00:03:08.592 00:18:34 -- common/autotest_common.sh@1667 -- # local nvme bdf 00:03:08.592 00:18:34 -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:03:08.592 00:18:34 -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:03:08.592 00:18:34 -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:03:08.592 00:18:34 -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:08.592 00:18:34 -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:03:08.592 00:18:34 -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:03:08.592 00:18:34 -- common/autotest_common.sh@1670 -- # is_block_zoned nvme1n1 00:03:08.592 00:18:34 -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:03:08.592 00:18:34 -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:08.592 00:18:34 -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:03:08.592 00:18:34 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:08.592 00:18:34 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:08.592 00:18:34 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:08.592 00:18:34 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:03:08.592 00:18:34 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:03:08.592 00:18:34 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:08.592 No valid GPT data, bailing 00:03:08.852 00:18:34 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:08.852 00:18:34 -- scripts/common.sh@391 -- # pt= 00:03:08.852 00:18:34 -- scripts/common.sh@392 -- # return 1 00:03:08.852 00:18:34 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:08.852 1+0 records in 00:03:08.852 1+0 records out 00:03:08.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00296597 s, 354 MB/s 00:03:08.852 00:18:34 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:08.852 00:18:34 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:08.852 00:18:34 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:03:08.852 00:18:34 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:03:08.852 00:18:34 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:08.852 No valid GPT data, bailing 00:03:08.852 00:18:34 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:08.852 00:18:34 -- scripts/common.sh@391 -- # pt= 00:03:08.852 00:18:34 -- scripts/common.sh@392 -- # return 1 00:03:08.852 00:18:34 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:08.852 1+0 records in 00:03:08.852 1+0 records out 00:03:08.852 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00275011 s, 381 MB/s 00:03:08.852 00:18:34 -- spdk/autotest.sh@118 -- # sync 00:03:08.852 00:18:34 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:08.852 00:18:34 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:08.852 00:18:34 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:14.133 00:18:40 -- spdk/autotest.sh@124 -- # uname -s 00:03:14.133 00:18:40 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:14.133 00:18:40 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/test-setup.sh 00:03:14.133 00:18:40 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:14.133 00:18:40 -- 
common/autotest_common.sh@1104 -- # xtrace_disable 00:03:14.133 00:18:40 -- common/autotest_common.sh@10 -- # set +x 00:03:14.133 ************************************ 00:03:14.133 START TEST setup.sh 00:03:14.133 ************************************ 00:03:14.133 00:18:40 setup.sh -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/test-setup.sh 00:03:14.133 * Looking for test storage... 00:03:14.133 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:03:14.133 00:18:40 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:14.133 00:18:40 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:14.133 00:18:40 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/acl.sh 00:03:14.133 00:18:40 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:14.133 00:18:40 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:14.133 00:18:40 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:14.133 ************************************ 00:03:14.133 START TEST acl 00:03:14.133 ************************************ 00:03:14.133 00:18:40 setup.sh.acl -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/acl.sh 00:03:14.393 * Looking for test storage... 00:03:14.393 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:03:14.393 00:18:40 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:14.393 00:18:40 setup.sh.acl -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:03:14.393 00:18:40 setup.sh.acl -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:03:14.393 00:18:40 setup.sh.acl -- common/autotest_common.sh@1667 -- # local nvme bdf 00:03:14.393 00:18:40 setup.sh.acl -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:03:14.393 00:18:40 setup.sh.acl -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:03:14.393 00:18:40 setup.sh.acl -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:03:14.393 00:18:40 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:14.393 00:18:40 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:03:14.393 00:18:40 setup.sh.acl -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:03:14.393 00:18:40 setup.sh.acl -- common/autotest_common.sh@1670 -- # is_block_zoned nvme1n1 00:03:14.393 00:18:40 setup.sh.acl -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:03:14.393 00:18:40 setup.sh.acl -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:14.393 00:18:40 setup.sh.acl -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:03:14.393 00:18:40 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:14.393 00:18:40 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:14.393 00:18:40 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:14.393 00:18:40 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:14.393 00:18:40 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:14.393 00:18:40 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:14.393 00:18:40 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:03:17.689 00:18:43 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:17.689 00:18:43 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 
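Editorial note: the is_block_zoned/get_zoned_devs xtrace repeated above boils down to one sysfs check per NVMe block device: a device stays eligible for the tests only while /sys/block/<dev>/queue/zoned reports "none". A hedged stand-alone sketch of that filter (the function name is illustrative, not taken from the scripts):

# List NVMe block devices that are zoned and therefore skipped by the setup scripts.
list_zoned_nvme() {
    local -A zoned_devs=()
    local nvme zoned
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned ]] || continue
        zoned=$(<"$nvme/queue/zoned")        # "none", "host-aware", or "host-managed"
        [[ $zoned != none ]] && zoned_devs[${nvme##*/}]=$zoned
    done
    if (( ${#zoned_devs[@]} )); then
        printf '%s\n' "${!zoned_devs[@]}"
    fi
}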
00:03:17.689 00:18:43 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:17.689 00:18:43 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:17.689 00:18:43 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:17.689 00:18:43 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status 00:03:20.232 Hugepages 00:03:20.232 node hugesize free / total 00:03:20.232 00:18:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:20.232 00:18:46 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:20.232 00:18:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:20.232 00:18:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:20.232 00:18:46 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:20.232 00:18:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:20.232 00:18:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:20.232 00:18:46 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:20.232 00:18:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:20.232 00:03:20.232 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:20.232 00:18:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:20.232 00:18:46 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:20.232 00:18:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:20.232 00:18:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:6a:01.0 == *:*:*.* ]] 00:03:20.232 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:03:20.232 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:6a:02.0 == *:*:*.* ]] 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:6f:01.0 == *:*:*.* ]] 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:6f:02.0 == *:*:*.* ]] 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:74:01.0 == *:*:*.* ]] 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:74:02.0 == *:*:*.* ]] 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:79:01.0 == *:*:*.* ]] 00:03:20.233 00:18:46 setup.sh.acl -- 
setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:79:02.0 == *:*:*.* ]] 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:c9:00.0 == *:*:*.* ]] 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\c\9\:\0\0\.\0* ]] 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:20.233 00:18:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:ca:00.0 == *:*:*.* ]] 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\c\a\:\0\0\.\0* ]] 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:e7:01.0 == *:*:*.* ]] 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:e7:02.0 == *:*:*.* ]] 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:ec:01.0 == *:*:*.* ]] 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:ec:02.0 == *:*:*.* ]] 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:f1:01.0 == *:*:*.* ]] 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:f1:02.0 == *:*:*.* ]] 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:f6:01.0 == 
*:*:*.* ]] 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:f6:02.0 == *:*:*.* ]] 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:03:20.493 00:18:46 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:20.493 00:18:46 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:20.493 00:18:46 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:20.493 00:18:46 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:20.493 ************************************ 00:03:20.493 START TEST denied 00:03:20.493 ************************************ 00:03:20.493 00:18:46 setup.sh.acl.denied -- common/autotest_common.sh@1122 -- # denied 00:03:20.493 00:18:46 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:c9:00.0' 00:03:20.493 00:18:46 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:c9:00.0' 00:03:20.493 00:18:46 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:20.493 00:18:46 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.493 00:18:46 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:25.840 0000:c9:00.0 (8086 0a54): Skipping denied controller at 0000:c9:00.0 00:03:25.840 00:18:51 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:c9:00.0 00:03:25.840 00:18:51 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:25.840 00:18:51 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:25.840 00:18:51 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:c9:00.0 ]] 00:03:25.840 00:18:51 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:c9:00.0/driver 00:03:25.840 00:18:51 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:25.840 00:18:51 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:25.840 00:18:51 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:25.840 00:18:51 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:25.840 00:18:51 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:03:31.122 00:03:31.122 real 0m10.389s 00:03:31.122 user 0m2.145s 00:03:31.122 sys 0m4.223s 00:03:31.122 00:18:56 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:31.122 00:18:56 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:31.122 ************************************ 00:03:31.122 END TEST denied 00:03:31.122 ************************************ 00:03:31.122 00:18:56 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:31.122 00:18:56 setup.sh.acl -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:31.122 00:18:56 setup.sh.acl -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:31.122 00:18:56 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:31.122 
************************************ 00:03:31.122 START TEST allowed 00:03:31.122 ************************************ 00:03:31.122 00:18:56 setup.sh.acl.allowed -- common/autotest_common.sh@1122 -- # allowed 00:03:31.122 00:18:56 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:c9:00.0 00:03:31.122 00:18:56 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:c9:00.0 .*: nvme -> .*' 00:03:31.122 00:18:56 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:31.122 00:18:56 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:31.122 00:18:56 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:36.405 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:03:36.405 00:19:02 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 0000:ca:00.0 00:03:36.405 00:19:02 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:36.405 00:19:02 setup.sh.acl.allowed -- setup/acl.sh@30 -- # for dev in "$@" 00:03:36.405 00:19:02 setup.sh.acl.allowed -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:ca:00.0 ]] 00:03:36.405 00:19:02 setup.sh.acl.allowed -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:ca:00.0/driver 00:03:36.405 00:19:02 setup.sh.acl.allowed -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:36.405 00:19:02 setup.sh.acl.allowed -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:36.405 00:19:02 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:36.405 00:19:02 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:36.405 00:19:02 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:03:39.704 00:03:39.704 real 0m8.862s 00:03:39.704 user 0m2.180s 00:03:39.704 sys 0m4.338s 00:03:39.704 00:19:05 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:39.704 00:19:05 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:39.704 ************************************ 00:03:39.704 END TEST allowed 00:03:39.704 ************************************ 00:03:39.704 00:03:39.704 real 0m25.583s 00:03:39.704 user 0m6.402s 00:03:39.704 sys 0m12.649s 00:03:39.704 00:19:05 setup.sh.acl -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:39.704 00:19:05 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:39.704 ************************************ 00:03:39.704 END TEST acl 00:03:39.704 ************************************ 00:03:39.967 00:19:05 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/hugepages.sh 00:03:39.967 00:19:05 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:39.967 00:19:05 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:39.967 00:19:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:39.967 ************************************ 00:03:39.967 START TEST hugepages 00:03:39.967 ************************************ 00:03:39.967 00:19:05 setup.sh.hugepages -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/hugepages.sh 00:03:39.967 * Looking for test storage... 
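Editorial note: both acl sub-tests above (denied with PCI_BLOCKED, allowed with PCI_ALLOWED) end in the same verify step: resolve the device's driver symlink under sysfs and compare its basename with the expected driver. A minimal sketch of that check; the helper name and the sample comparison are illustrative only:

# Return success when the PCI function at $1 is currently bound to driver $2.
check_driver() {
    local bdf=$1 want=$2 link
    link=$(readlink -f "/sys/bus/pci/devices/$bdf/driver") || return 1
    [[ ${link##*/} == "$want" ]]
}

# e.g. after the allowed test rebinds 0000:c9:00.0 (nvme -> vfio-pci):
check_driver 0000:c9:00.0 vfio-pci && echo "0000:c9:00.0 bound to vfio-pci"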
00:03:39.967 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 241057760 kB' 'MemAvailable: 243690392 kB' 'Buffers: 2696 kB' 'Cached: 9357760 kB' 'SwapCached: 0 kB' 'Active: 6447772 kB' 'Inactive: 3418872 kB' 'Active(anon): 5881804 kB' 'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 515480 kB' 'Mapped: 166972 kB' 'Shmem: 5375616 kB' 'KReclaimable: 253688 kB' 'Slab: 815488 kB' 'SReclaimable: 253688 kB' 'SUnreclaim: 561800 kB' 'KernelStack: 24944 kB' 'PageTables: 7896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 135570692 kB' 'Committed_AS: 7304980 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329296 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB' 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.967 00:19:06 
setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.967 00:19:06 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:39.967 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:39.968 00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:39.968 
00:19:06 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:39.968 [... xtrace condensed: setup/common.sh@31-32 read and skip the remaining /proc/meminfo fields, Shmem through HugePages_Surp; none of them match Hugepagesize ...] 00:03:39.969 00:19:06 setup.sh.hugepages --
setup/common.sh@31 -- # read -r var val _ 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:39.969 00:19:06 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:39.969 00:19:06 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:39.969 00:19:06 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:39.969 00:19:06 
setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:39.969 ************************************ 00:03:39.969 START TEST default_setup 00:03:39.969 ************************************ 00:03:39.969 00:19:06 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1122 -- # default_setup 00:03:39.969 00:19:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:39.969 00:19:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:39.969 00:19:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:39.969 00:19:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:39.969 00:19:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:39.969 00:19:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:39.969 00:19:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:39.969 00:19:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:39.969 00:19:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:39.969 00:19:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:39.969 00:19:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:39.969 00:19:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:39.969 00:19:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:39.969 00:19:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:39.969 00:19:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:39.969 00:19:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:39.969 00:19:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:39.969 00:19:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:39.969 00:19:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:39.969 00:19:06 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:39.969 00:19:06 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:39.969 00:19:06 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:03:43.267 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:43.267 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:43.267 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:43.267 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:03:43.267 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:43.267 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:03:43.267 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:43.267 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:03:43.267 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:43.267 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:03:43.267 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:03:43.267 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:03:43.267 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:43.267 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:03:43.267 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:43.267 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 
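Everything up to this point is hugepages.sh sizing the pool before scripts/setup.sh rebinds the idxd and nvme devices: get_meminfo() pulls Hugepagesize out of /proc/meminfo with the IFS=': ' read loop traced above, and default_setup turns the requested 2097152 kB into nr_hugepages=1024 pages for node 0. A minimal sketch of that flow, distilled from the trace rather than copied from the SPDK scripts (names follow the trace; the final write is shown as an echo because changing /proc/sys/vm/nr_hugepages needs root):

    # Parse one field out of /proc/meminfo the way setup/common.sh's get_meminfo does:
    # with IFS=': ' the line "Hugepagesize:    2048 kB" splits into var=Hugepagesize, val=2048.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    default_hugepages=$(get_meminfo Hugepagesize)     # 2048 (kB) on this host, as echoed above
    size=2097152                                      # kB requested by get_test_nr_hugepages
    nr_hugepages=$(( size / default_hugepages ))      # -> 1024 pages, matching nr_hugepages=1024 in the trace
    echo "node0: would write $nr_hugepages to /proc/sys/vm/nr_hugepages"

clear_hp does the inverse beforehand, echoing 0 into every /sys/devices/system/node/node*/hugepages/hugepages-*/ counter so the test starts from an empty pool.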
00:03:44.652 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:03:45.223 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci 00:03:45.485 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:45.485 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:45.485 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:45.485 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:45.485 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:45.485 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:45.485 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:45.485 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:45.485 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:45.485 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:45.485 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:45.485 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:45.485 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:45.485 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.485 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.485 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.485 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.485 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.485 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.485 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.485 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 243335376 kB' 'MemAvailable: 245967092 kB' 'Buffers: 2696 kB' 'Cached: 9358020 kB' 'SwapCached: 0 kB' 'Active: 6470536 kB' 'Inactive: 3418872 kB' 'Active(anon): 5904568 kB' 'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538048 kB' 'Mapped: 166920 kB' 'Shmem: 5375876 kB' 'KReclaimable: 251856 kB' 'Slab: 809864 kB' 'SReclaimable: 251856 kB' 'SUnreclaim: 558008 kB' 'KernelStack: 24736 kB' 'PageTables: 8688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619268 kB' 'Committed_AS: 7366072 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329264 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB' 00:03:45.485 
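The /proc/meminfo dump above is the snapshot verify_nr_hugepages now works through: the trace below is it scanning that snapshot for AnonHugePages, then HugePages_Surp and HugePages_Rsvd, one field at a time. With the get_meminfo sketch from the previous aside the same numbers fall out directly (values are the ones printed in the snapshot; the summary line is illustrative, not the script's own output):

    anon=$(get_meminfo AnonHugePages)     # 0 kB  - no anonymous THP in use
    surp=$(get_meminfo HugePages_Surp)    # 0     - no surplus pages
    resv=$(get_meminfo HugePages_Rsvd)    # 0     - nothing reserved but not yet faulted in
    total=$(get_meminfo HugePages_Total)  # 1024  - the 2 GiB pool default_setup just allocated
    free=$(get_meminfo HugePages_Free)    # 1024  - still completely unused
    echo "anon=$anon surp=$surp resv=$resv hugepages=$free/$total"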
00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.485 [... xtrace condensed: setup/common.sh@31-32 read and skip each /proc/meminfo field from MemTotal through Committed_AS; none of them match AnonHugePages ...] 00:03:45.486 00:19:11
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.486 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.486 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.486 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.486 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.486 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.486 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.486 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.486 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.486 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.486 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.486 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.486 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.486 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.486 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # 
mapfile -t mem 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 243333504 kB' 'MemAvailable: 245965220 kB' 'Buffers: 2696 kB' 'Cached: 9358024 kB' 'SwapCached: 0 kB' 'Active: 6470608 kB' 'Inactive: 3418872 kB' 'Active(anon): 5904640 kB' 'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538096 kB' 'Mapped: 166908 kB' 'Shmem: 5375880 kB' 'KReclaimable: 251856 kB' 'Slab: 809856 kB' 'SReclaimable: 251856 kB' 'SUnreclaim: 558000 kB' 'KernelStack: 24720 kB' 'PageTables: 8620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619268 kB' 'Committed_AS: 7366092 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329216 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB' 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.487 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:45.487 [... xtrace condensed: setup/common.sh@31-32 read and skip each /proc/meminfo field from SwapCached through CmaFree; none of them match HugePages_Surp ...] 00:03:45.488 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.488 00:19:11
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.488 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.488 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.488 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.488 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.488 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.488 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.488 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.488 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.488 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.488 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.488 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.488 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.488 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.488 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.488 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.488 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:45.488 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:45.753 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:45.753 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:45.753 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:45.753 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:45.753 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:45.753 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:45.753 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.753 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.753 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.753 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.753 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.753 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.753 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.753 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 243330984 kB' 'MemAvailable: 245962700 kB' 'Buffers: 2696 kB' 'Cached: 9358044 kB' 'SwapCached: 0 kB' 'Active: 6470632 kB' 'Inactive: 3418872 kB' 'Active(anon): 5904664 kB' 'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 
9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538096 kB' 'Mapped: 166908 kB' 'Shmem: 5375900 kB' 'KReclaimable: 251856 kB' 'Slab: 809856 kB' 'SReclaimable: 251856 kB' 'SUnreclaim: 558000 kB' 'KernelStack: 24736 kB' 'PageTables: 8624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619268 kB' 'Committed_AS: 7366112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329216 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB' 00:03:45.753 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.753 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.753 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.753 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.753 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.753 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.753 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.753 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.753 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.753 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.753 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.753 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.754 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.755 00:19:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:45.755 nr_hugepages=1024 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:45.755 resv_hugepages=0 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:45.755 surplus_hugepages=0 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:45.755 anon_hugepages=0 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 243328496 kB' 'MemAvailable: 245960212 kB' 'Buffers: 2696 kB' 'Cached: 9358064 kB' 'SwapCached: 0 kB' 'Active: 6470620 kB' 'Inactive: 3418872 kB' 'Active(anon): 5904652 kB' 'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538072 kB' 'Mapped: 166908 kB' 'Shmem: 5375920 kB' 'KReclaimable: 251856 kB' 'Slab: 809856 kB' 'SReclaimable: 251856 kB' 'SUnreclaim: 558000 kB' 'KernelStack: 24720 kB' 'PageTables: 8576 kB' 'SecPageTables: 
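With surp=0 and resv=0 resolved, the trace above (hugepages.sh@107 and @109) checks that the expected page count adds up before moving on. Restated as a tiny self-contained sketch with the values this run reported:

# Values taken directly from the meminfo dumps logged in this run.
nr_hugepages=1024   # HugePages_Total
resv=0              # HugePages_Rsvd
surp=0              # HugePages_Surp (anon_hugepages is also 0 here)
if (( 1024 == nr_hugepages + surp + resv )) && (( 1024 == nr_hugepages )); then
    echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp"
fi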
0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619268 kB' 'Committed_AS: 7366136 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329200 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB' 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.755 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.756 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131816228 kB' 'MemFree: 119923900 kB' 'MemUsed: 11892328 kB' 'SwapCached: 0 kB' 'Active: 4424488 kB' 'Inactive: 3310852 kB' 'Active(anon): 4245188 kB' 'Inactive(anon): 0 kB' 'Active(file): 179300 kB' 'Inactive(file): 3310852 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7611296 kB' 'Mapped: 71312 kB' 'AnonPages: 133252 kB' 'Shmem: 4121144 kB' 'KernelStack: 12920 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138612 kB' 'Slab: 457252 kB' 'SReclaimable: 138612 kB' 'SUnreclaim: 318640 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == 
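At this point the script switches from system-wide /proc/meminfo to per-node accounting: get_nodes enumerates /sys/devices/system/node/node*, and the node-0 dump above shows all 1024 pages sitting on node 0. A self-contained sketch of that per-node walk (illustrative names only, not the SPDK helpers):

declare -A nodes_test
for node in /sys/devices/system/node/node[0-9]*; do
    n=${node##*node}
    # Per-node meminfo lines look like "Node 0 HugePages_Total:  1024".
    nodes_test[$n]=$(awk -F': +' '/^Node [0-9]+ HugePages_Total:/ {print $2}' \
                     "$node/meminfo")
done
for n in "${!nodes_test[@]}"; do echo "node$n=${nodes_test[$n]}"; done
# -> node0=1024 (and node1=0 on this two-node box), matching the check below.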
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.757 00:19:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.757 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.758 00:19:11 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:45.758 00:19:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:45.758 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:45.758 node0=1024 expecting 1024 00:03:45.759 00:19:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:45.759 00:03:45.759 real 0m5.618s 00:03:45.759 user 0m1.166s 00:03:45.759 sys 0m2.111s 00:03:45.759 00:19:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:45.759 00:19:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:45.759 ************************************ 00:03:45.759 END TEST default_setup 00:03:45.759 ************************************ 00:03:45.759 00:19:11 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:45.759 00:19:11 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:45.759 00:19:11 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:45.759 00:19:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:45.759 ************************************ 00:03:45.759 START TEST per_node_1G_alloc 00:03:45.759 ************************************ 00:03:45.759 00:19:11 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1122 -- # per_node_1G_alloc 00:03:45.759 00:19:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:45.759 00:19:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:45.759 00:19:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:45.759 00:19:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:45.759 00:19:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:45.759 00:19:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:45.759 00:19:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:45.759 00:19:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:45.759 00:19:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:45.759 00:19:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:45.759 00:19:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:45.759 00:19:11 
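In the per_node_1G_alloc setup that starts here, get_test_nr_hugepages 1048576 0 1 asks for 1 GB (1048576 kB) worth of hugepages on each of nodes 0 and 1. With the default hugepage size of 2048 kB that the meminfo snapshots in this log report (Hugepagesize: 2048 kB), that works out to the nr_hugepages=512 and the two nodes_test[...]=512 assignments seen in the trace. A minimal sketch of that sizing arithmetic, using hypothetical variable names rather than the script's own:

    size_kb=1048576            # 1 GB requested per node, in kB
    hugepage_kb=2048           # "Hugepagesize: 2048 kB" from /proc/meminfo
    node_ids=(0 1)

    (( pages_per_node = size_kb / hugepage_kb ))   # 512

    nodes_test=()
    for n in "${node_ids[@]}"; do
        nodes_test[n]=$pages_per_node
    done
    echo "requesting ${pages_per_node} x ${hugepage_kb} kB pages on nodes ${node_ids[*]}"

Keeping a per-node array is what lets the test pin an exact count to each node instead of relying on the kernel's own distribution of a single global nr_hugepages value.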
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.759 00:19:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:45.759 00:19:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:45.759 00:19:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.759 00:19:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.759 00:19:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:45.759 00:19:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:45.759 00:19:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:45.759 00:19:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:45.759 00:19:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:45.759 00:19:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:45.759 00:19:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:45.759 00:19:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:45.759 00:19:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:45.759 00:19:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.759 00:19:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:03:48.301 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:48.301 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:48.301 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:48.301 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:48.301 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:48.301 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:48.301 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:48.301 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:48.301 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:48.301 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:48.301 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:48.301 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:48.301 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:48.301 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:48.301 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:48.301 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:48.301 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:48.301 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc 
-- setup/hugepages.sh@91 -- # local sorted_s 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 243313688 kB' 'MemAvailable: 245945404 kB' 'Buffers: 2696 kB' 'Cached: 9358176 kB' 'SwapCached: 0 kB' 'Active: 6471500 kB' 'Inactive: 3418872 kB' 'Active(anon): 5905532 kB' 'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538720 kB' 'Mapped: 166940 kB' 'Shmem: 5376032 kB' 'KReclaimable: 251856 kB' 'Slab: 808464 kB' 'SReclaimable: 251856 kB' 'SUnreclaim: 556608 kB' 'KernelStack: 24800 kB' 'PageTables: 8676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619268 kB' 'Committed_AS: 7366864 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329136 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB' 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.878 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.879 00:19:14 
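The NRHUGE=512 HUGENODE=0,1 invocation of scripts/setup.sh a little earlier in the trace asks for 512 default-size hugepages on each of nodes 0 and 1; the PCI functions it then lists are already bound to vfio-pci, so the interesting work at this point is the hugepage reservation. On Linux, a per-node reservation of that kind is normally expressed through the per-node sysfs counters; the following is a minimal sketch of that mechanism, not the script's actual code:

    # Reserve 512 x 2048 kB hugepages on NUMA nodes 0 and 1 (requires root).
    NRHUGE=512
    for node in 0 1; do
        echo "$NRHUGE" | sudo tee \
            "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
    done
    # Show what the kernel actually granted per node:
    grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages

Reading the counter back matters because the write is a request, not a guarantee; if memory on a node is fragmented, the kernel may allocate fewer pages than asked for.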
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.879 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.880 00:19:14 
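The long runs of IFS=': ', read -r var val _ and continue above are the field-by-field scan that setup/common.sh's get_meminfo performs: the whole meminfo file is slurped with mapfile and then walked until the requested field is reached (here AnonHugePages, which came back as 0, so anon=0). A standalone sketch of the same idea, with a hypothetical function name:

    # Print the value of one /proc/meminfo field, e.g.: meminfo_value AnonHugePages
    meminfo_value() {
        local get=$1 line var val _
        local -a mem
        mapfile -t mem < /proc/meminfo          # one "Key:   value kB" entry per element
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue    # every other field is skipped
            echo "$val"                         # the unit, if any, lands in the third field
            return 0
        done
        return 1
    }

Against the snapshot printed above this would yield 0 for AnonHugePages and 1024 for HugePages_Total.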
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 243314100 kB' 'MemAvailable: 245945816 kB' 'Buffers: 2696 kB' 'Cached: 9358180 kB' 'SwapCached: 0 kB' 'Active: 6471764 kB' 'Inactive: 3418872 kB' 'Active(anon): 5905796 kB' 'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 539072 kB' 'Mapped: 166940 kB' 'Shmem: 5376036 kB' 'KReclaimable: 251856 kB' 'Slab: 808500 kB' 'SReclaimable: 251856 kB' 'SUnreclaim: 556644 kB' 'KernelStack: 24832 kB' 'PageTables: 8688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619268 kB' 'Committed_AS: 7366884 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329120 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB' 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.880 00:19:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.880 00:19:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.880 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 
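When get_meminfo is given a node number, the same routine points mem_f at /sys/devices/system/node/node$node/meminfo (the [[ -e /sys/devices/system/node/node/meminfo ]] lines above are that existence check with an empty node argument, so the global /proc/meminfo is used instead) and strips the leading "Node N " prefix from every line with the extglob substitution ${mem[@]#Node +([0-9]) } before running the same scan. A minimal sketch of the per-node variant, assuming extglob and a hypothetical function name:

    shopt -s extglob
    # Print one field from a specific NUMA node's meminfo,
    # e.g.: node_meminfo_value 0 HugePages_Free
    node_meminfo_value() {
        local node=$1 get=$2 line var val _
        local -a mem
        mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
        mem=("${mem[@]#Node +([0-9]) }")        # "Node 0 MemTotal: ..." -> "MemTotal: ..."
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

The prefix strip is what lets one parsing loop serve both the global /proc/meminfo and the per-node files, whose lines differ only by that "Node N " prefix.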
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.881 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.882 00:19:14 
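By this point the verification has established anon=0 and surp=0 and is fetching HugePages_Rsvd; the earlier default_setup pass ended the same routine by printing node0=1024 expecting 1024 and asserting [[ 1024 == 1024 ]]. A minimal sketch of that final per-node comparison, mirroring the output format only and making no claim about the exact surplus/reserved bookkeeping hugepages.sh applies:

    expected=1024   # pages the test asked the kernel to reserve on this node
    node=0
    got=$(awk '/HugePages_Total/ {print $NF}' "/sys/devices/system/node/node${node}/meminfo")
    echo "node${node}=${got:-0} expecting ${expected}"   # same shape as the line in the trace
    [[ ${got:-0} == "$expected" ]]                       # non-zero exit code on a mismatch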
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 243314680 kB' 'MemAvailable: 245946396 kB' 'Buffers: 2696 kB' 'Cached: 9358196 kB' 'SwapCached: 0 kB' 'Active: 6471092 kB' 'Inactive: 3418872 kB' 'Active(anon): 5905124 kB' 'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538376 kB' 'Mapped: 166932 kB' 'Shmem: 5376052 kB' 'KReclaimable: 251856 kB' 'Slab: 808500 kB' 'SReclaimable: 251856 kB' 'SUnreclaim: 556644 kB' 'KernelStack: 24816 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619268 kB' 'Committed_AS: 7366904 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329120 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB' 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.882 00:19:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.882 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.883 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:48.884 nr_hugepages=1024 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:48.884 resv_hugepages=0 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:48.884 surplus_hugepages=0 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:48.884 anon_hugepages=0 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.884 00:19:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 243315392 kB' 'MemAvailable: 245947108 kB' 'Buffers: 2696 kB' 'Cached: 9358220 kB' 'SwapCached: 0 kB' 'Active: 6471208 kB' 'Inactive: 3418872 kB' 'Active(anon): 5905240 kB' 'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538408 kB' 'Mapped: 166932 kB' 'Shmem: 5376076 kB' 'KReclaimable: 251856 kB' 'Slab: 808500 kB' 'SReclaimable: 251856 kB' 'SUnreclaim: 556644 kB' 'KernelStack: 24832 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619268 kB' 'Committed_AS: 7366928 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329120 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB' 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.884 00:19:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.884 00:19:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.884 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
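[editor's note] A reading aid for the trace in this section: key names such as \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l are not garbled output. This appears to be how bash xtrace renders a quoted string on the right-hand side of == inside [[ ]]: each character is backslash-escaped so the traced form still means a literal match rather than a glob. A hypothetical stand-alone repro (not taken from the SPDK scripts):

    # Hypothetical repro of the escaped rendering seen in this trace.
    set -x
    get=HugePages_Total   # key the caller asked for
    var=MemTotal          # key just read from meminfo
    # With "$get" quoted the comparison is literal; xtrace prints it as
    #   [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
    [[ $var == "$get" ]] || echo 'no match, keep scanning'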
00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.885 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 
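[editor's note] The scan that finishes just above walks every key printed from /proc/meminfo until it reaches HugePages_Total and echoes 1024 (and, earlier in the same pass, 0 for HugePages_Rsvd), which hugepages.sh then checks against nr_hugepages + surp + resv. A minimal sketch of that helper as reconstructed from the xtrace output here; it is not the verbatim setup/common.sh source, and line numbers or exact quoting may differ:

    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}   # meminfo key, optional NUMA node id
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Prefer the per-node file when a node id was given and it exists
        # (with an empty $node the path checked at @23 does not exist).
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node meminfo lines carry a "Node N " prefix; strip it (@29).
        mem=("${mem[@]#Node +([0-9]) }")
        # Scan key/value pairs until the requested key matches (@31-@33).
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

    # In this run: get_meminfo HugePages_Total -> 1024,
    #              get_meminfo HugePages_Rsvd  -> 0.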
00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131816228 kB' 'MemFree: 120976540 kB' 'MemUsed: 10839688 kB' 'SwapCached: 0 kB' 'Active: 4423076 kB' 'Inactive: 3310852 kB' 'Active(anon): 4243776 kB' 'Inactive(anon): 0 kB' 'Active(file): 179300 kB' 'Inactive(file): 3310852 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7611328 kB' 'Mapped: 71336 kB' 'AnonPages: 131704 kB' 'Shmem: 4121176 kB' 'KernelStack: 13032 kB' 'PageTables: 3892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138612 kB' 'Slab: 456004 kB' 'SReclaimable: 138612 kB' 'SUnreclaim: 317392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 00:19:14 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.886 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 00:19:15 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126742256 kB' 'MemFree: 122338880 kB' 'MemUsed: 4403376 kB' 'SwapCached: 0 kB' 'Active: 2048500 kB' 'Inactive: 108020 kB' 'Active(anon): 1661832 kB' 'Inactive(anon): 0 kB' 'Active(file): 386668 kB' 'Inactive(file): 108020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1749628 kB' 'Mapped: 95596 kB' 'AnonPages: 407044 kB' 'Shmem: 1254940 kB' 'KernelStack: 11800 kB' 'PageTables: 4792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113244 kB' 'Slab: 352496 kB' 'SReclaimable: 113244 kB' 'SUnreclaim: 239252 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.887 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.888 
[xtrace elided: the same per-field scan runs over the node1 snapshot just printed (MemFree through Unaccepted); each field is skipped until HugePages_Surp is reached below]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.889 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.889 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:48.889 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:48.889 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:48.889 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:48.889 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:48.889 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:48.889 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:48.889 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.889 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.889 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.889 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:48.889 node0=512 expecting 512 00:03:48.889 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:48.889 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:48.889 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:48.889 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:48.889 node1=512 expecting 512 00:03:48.889 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:48.889 00:03:48.889 real 0m3.250s 00:03:48.889 user 0m1.056s 00:03:48.889 sys 0m2.066s 00:03:48.889 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:48.889 00:19:15 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:48.889 ************************************ 00:03:48.889 END TEST per_node_1G_alloc 00:03:48.889 ************************************ 00:03:49.150 00:19:15 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:49.150 00:19:15 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:49.150 00:19:15 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:49.150 00:19:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:49.150 ************************************ 00:03:49.150 START TEST even_2G_alloc 00:03:49.150 ************************************ 00:03:49.150 00:19:15 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1122 -- # even_2G_alloc 00:03:49.150 00:19:15 
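For reference: the HugePages_Surp lookups traced in the per_node_1G_alloc block above are performed by setup/common.sh's get_meminfo, which reads either /proc/meminfo or a node's /sys/devices/system/node/nodeN/meminfo and prints the value of the first field whose name matches. A minimal sketch of that kind of lookup, assuming a simplified stand-in rather than the verbatim SPDK helper:

    # Minimal sketch (assumption: simplified stand-in, not the verbatim
    # setup/common.sh). Prints one meminfo field, system-wide or per NUMA node.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val rest
        while read -r line; do
            line=${line#"Node $node "}   # per-node files prefix every field with "Node <n> "
            IFS=': ' read -r var val rest <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"
                return 0
            fi
        done <"$mem_f"
        echo 0
    }

Against the node-1 snapshot printed above, get_meminfo HugePages_Surp 1 prints 0 and get_meminfo HugePages_Total 1 prints 512; hugepages.sh then folds that surplus into nodes_test[1] before logging the node0=512 / node1=512 expectations.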
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:49.150 00:19:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:49.150 00:19:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:49.150 00:19:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:49.150 00:19:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:49.150 00:19:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:49.150 00:19:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:49.150 00:19:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:49.150 00:19:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:49.150 00:19:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:49.150 00:19:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:49.150 00:19:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:49.150 00:19:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:49.150 00:19:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:49.150 00:19:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:49.150 00:19:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:49.150 00:19:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:49.151 00:19:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:49.151 00:19:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:49.151 00:19:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:49.151 00:19:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:49.151 00:19:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:49.151 00:19:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:49.151 00:19:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:49.151 00:19:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:49.151 00:19:15 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:49.151 00:19:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:49.151 00:19:15 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:03:52.452 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:52.452 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:52.452 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:52.452 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:52.452 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:52.452 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:52.452 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:52.452 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:52.452 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 
00:03:52.452 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:52.452 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:52.452 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:52.452 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:52.452 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:52.452 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:52.452 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:52.452 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:52.452 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:52.452 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:52.452 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:52.452 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:52.452 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:52.452 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:52.452 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 243341936 kB' 'MemAvailable: 245973636 kB' 'Buffers: 2696 kB' 'Cached: 9358348 kB' 'SwapCached: 0 kB' 'Active: 6462432 kB' 'Inactive: 3418872 kB' 'Active(anon): 5896464 kB' 'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529516 kB' 'Mapped: 165956 kB' 'Shmem: 5376204 kB' 'KReclaimable: 251824 kB' 'Slab: 808104 kB' 'SReclaimable: 251824 kB' 'SUnreclaim: 556280 kB' 'KernelStack: 24608 kB' 'PageTables: 7496 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619268 kB' 
'Committed_AS: 7309840 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329024 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB' 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.453 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.453 00:19:18 
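The snapshot just printed is the whole-system /proc/meminfo that verify_nr_hugepages samples for the even_2G_alloc case: HugePages_Total: 1024 with Hugepagesize: 2048 kB is the requested 2 GiB, and because transparent hugepages are not set to never (the traced mode string is "always [madvise] never"), AnonHugePages is read first (it resolves to 0 below) before HugePages_Surp. With NRHUGE=1024 and HUGE_EVEN_ALLOC=yes the expectation is an even 512/512 split across the two NUMA nodes. A minimal sketch of that even-split check, assuming a simplified stand-in for the traced setup/hugepages.sh logic:

    # Minimal sketch (assumption: simplified stand-in, not the verbatim
    # setup/hugepages.sh). Checks that every NUMA node holds the expected
    # number of non-surplus 2048 kB hugepages.
    verify_even_alloc() {
        local expected_per_node=$1             # 512 in this run
        local node_dir node total surp
        for node_dir in /sys/devices/system/node/node[0-9]*; do
            node=${node_dir##*/}
            total=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
            surp=$(awk '/HugePages_Surp/ {print $NF}' "$node_dir/meminfo")
            echo "$node=$((total - surp)) expecting $expected_per_node"
            (( total - surp == expected_per_node )) || return 1
        done
    }

Called as verify_even_alloc 512, it prints the same style of "node0=512 expecting 512" / "node1=512 expecting 512" lines these hugepages.sh tests log once both sockets hold their half with no surplus pages.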
[xtrace elided: the per-field scan of the /proc/meminfo snapshot continues (Active(anon) through CommitLimit), skipping each field on the way to AnonHugePages, which is matched below]
]] 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 
-- # local var val 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 243341432 kB' 'MemAvailable: 245973132 kB' 'Buffers: 2696 kB' 'Cached: 9358348 kB' 'SwapCached: 0 kB' 'Active: 6463280 kB' 'Inactive: 3418872 kB' 'Active(anon): 5897312 kB' 'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530372 kB' 'Mapped: 165956 kB' 'Shmem: 5376204 kB' 'KReclaimable: 251824 kB' 'Slab: 808112 kB' 'SReclaimable: 251824 kB' 'SUnreclaim: 556288 kB' 'KernelStack: 24688 kB' 'PageTables: 7716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619268 kB' 'Committed_AS: 7309744 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329040 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB' 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.454 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == 
[xtrace elided: the scan of the second /proc/meminfo snapshot continues (the Buffers comparison opened above, through VmallocChunk), still heading for HugePages_Surp]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.455 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.455 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.455 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.455 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.455 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.455 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.455 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.455 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.455 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.455 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
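The xtrace above is setup/common.sh's get_meminfo walking every /proc/meminfo field until it reaches the one requested (HugePages_Surp here, which comes back as 0). As a rough standalone illustration of that lookup pattern only, the sketch below reads a meminfo file, strips the per-node "Node <n> " prefix when present, and echoes the value of the requested field; the function name get_meminfo_sketch, the sed-based prefix strip, and the zero fallback are assumptions for illustration, not the project's actual code.

  #!/usr/bin/env bash
  # Hedged sketch only, not SPDK's implementation: look up one field from a
  # meminfo file the way the trace above does (scan line by line, echo the value).
  get_meminfo_sketch() {
      local get=$1 node=${2:-}                      # field name, optional NUMA node id
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local var val _
      # Per-node meminfo prefixes every line with "Node <n> "; drop that first.
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "${val:-0}"
              return 0
          fi
      done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
      echo 0                                        # field not present
  }
  # Example: get_meminfo_sketch HugePages_Surp 0   -> 0, matching the trace above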
00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.456 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 243342252 kB' 'MemAvailable: 245973952 kB' 'Buffers: 2696 kB' 'Cached: 9358348 kB' 'SwapCached: 0 kB' 'Active: 6462808 kB' 'Inactive: 3418872 kB' 'Active(anon): 5896840 kB' 'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529796 kB' 'Mapped: 165956 kB' 'Shmem: 5376204 kB' 'KReclaimable: 251824 kB' 'Slab: 808104 kB' 'SReclaimable: 251824 kB' 'SUnreclaim: 556280 kB' 'KernelStack: 24608 kB' 'PageTables: 7472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619268 kB' 'Committed_AS: 7309764 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329024 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB'
00:03:52.458 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:52.458 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.458 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:52.458 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:52.458 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:52.458 nr_hugepages=1024
00:03:52.458 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:52.458 resv_hugepages=0
00:03:52.458 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:52.458 surplus_hugepages=0
00:03:52.458 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:52.458 anon_hugepages=0
00:03:52.458 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:52.458 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
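At this point the test has established nr_hugepages=1024 with no reserved, surplus, or anonymous hugepages, and the next trace entries recheck that the kernel's global HugePages_Total agrees with that accounting. Below is a minimal sketch of that consistency check, reusing the hypothetical get_meminfo_sketch helper above; variable names are illustrative, not the test's own.

  # Hedged sketch of the bookkeeping implied by hugepages.sh@107/@110 in this
  # trace: the kernel's HugePages_Total should equal the requested page count
  # plus surplus and reserved pages.
  nr_hugepages=1024
  surp=$(get_meminfo_sketch HugePages_Surp)     # 0 in this run
  resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 in this run
  total=$(get_meminfo_sketch HugePages_Total)   # 1024 in this run
  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage accounting consistent: ${total} == ${nr_hugepages} + ${surp} + ${resv}"
  else
      echo "unexpected hugepage accounting: total=${total} surp=${surp} resv=${resv}" >&2
  fi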
00:03:52.458 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:52.458 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:52.458 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:52.458 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:52.458 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:52.458 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:52.458 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:52.458 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:52.458 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:52.458 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:52.458 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.458 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.458 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 243342408 kB' 'MemAvailable: 245974108 kB' 'Buffers: 2696 kB' 'Cached: 9358392 kB' 'SwapCached: 0 kB' 'Active: 6462868 kB' 'Inactive: 3418872 kB' 'Active(anon): 5896900 kB' 'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529876 kB' 'Mapped: 165956 kB' 'Shmem: 5376248 kB' 'KReclaimable: 251824 kB' 'Slab: 808212 kB' 'SReclaimable: 251824 kB' 'SUnreclaim: 556388 kB' 'KernelStack: 24656 kB' 'PageTables: 7640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619268 kB' 'Committed_AS: 7309788 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329024 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB'
00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
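get_nodes has just found two NUMA nodes and recorded 512 pages for each, i.e. the even_2G_alloc case splits the 1024 global 2048 kB pages evenly across nodes before verifying each node's share against its per-node meminfo. A small sketch of that even split under the same assumptions follows; the sysfs glob matches the trace, while the array and variable names are illustrative only.

  # Hedged sketch of the even per-node split this test case exercises: divide
  # the global 2048 kB hugepage count across the online NUMA nodes (512 per
  # node here) so the per-node meminfo checks have an expected value.
  shopt -s extglob nullglob
  nr_hugepages=1024
  declare -A nodes_expected=()
  nodes=(/sys/devices/system/node/node+([0-9]))
  no_nodes=${#nodes[@]}                           # 2 on this machine
  for node in "${nodes[@]}"; do
      nodes_expected[${node##*node}]=$(( nr_hugepages / no_nodes ))
  done
  declare -p nodes_expected                       # e.g. [0]="512" [1]="512"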
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.460 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.461 00:19:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.461 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126742256 kB' 'MemFree: 122359260 kB' 'MemUsed: 4382996 kB' 'SwapCached: 0 kB' 'Active: 2044768 kB' 'Inactive: 108020 kB' 'Active(anon): 1658100 kB' 'Inactive(anon): 0 kB' 'Active(file): 386668 kB' 'Inactive(file): 108020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1749760 kB' 'Mapped: 95596 kB' 'AnonPages: 403204 kB' 'Shmem: 1255072 kB' 'KernelStack: 11784 kB' 'PageTables: 4704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113276 kB' 'Slab: 352728 kB' 'SReclaimable: 113276 kB' 'SUnreclaim: 239452 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:52.462 00:19:18 
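At this point node0 has been confirmed to hold 512 pages with no surplus, and the same per-node dump is being repeated for node1. Outside the test, the even split can be spot-checked straight from each node's meminfo; the loop below is an illustrative check of that kind and is not something the SPDK scripts run.

    #!/usr/bin/env bash
    # Spot-check the per-node 2 MB hugepage split (the test expects 512 on each node).
    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        total=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
        free=$(awk '/HugePages_Free/ {print $NF}' "$node_dir/meminfo")
        echo "node${node}: HugePages_Total=${total} HugePages_Free=${free}"
    done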
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.462 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.463 00:19:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue
00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:52.463 node0=512 expecting 512
00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:03:52.463 node1=512 expecting 512
00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:52.463
00:03:52.463 real 0m3.407s
00:03:52.463 user 0m1.111s
00:03:52.463 sys 0m2.160s
00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable
00:03:52.463 00:19:18 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:52.463 ************************************
00:03:52.463 END TEST even_2G_alloc
00:03:52.463 ************************************
00:03:52.463 00:19:18 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:52.463 00:19:18 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']'
00:03:52.463 00:19:18 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable
00:03:52.463 00:19:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:52.463 ************************************
00:03:52.463 START TEST odd_alloc
00:03:52.463 ************************************
00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1122 -- # odd_alloc
00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- #
(( 1 > 1 )) 00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.463 00:19:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:03:55.001 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:55.001 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:55.001 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:55.001 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:55.001 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:55.001 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:55.001 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:55.001 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:55.001 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:55.001 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:55.001 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:55.001 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:55.001 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:55.001 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:55.001 
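odd_alloc asks for 2098176 kB of 2 MB pages, i.e. HUGEMEM=2049 (MB), which the trace turns into nr_hugepages=1025 and then spreads over the two NUMA nodes as 513 for node0 and 512 for node1 before re-running scripts/setup.sh with HUGE_EVEN_ALLOC=yes. The split arithmetic can be reproduced with the small sketch below; the variable names mirror the trace, but the loop itself only illustrates the floor/ceil distribution and is not the script's exact code.

    #!/usr/bin/env bash
    # Reproduce the 513/512 per-node split of an odd hugepage count (illustrative).
    nr_hugepages=1025
    no_nodes=2

    declare -a nodes_test
    per_node=$(( nr_hugepages / no_nodes ))    # 512
    remainder=$(( nr_hugepages % no_nodes ))   # 1 leftover page

    for (( node = 0; node < no_nodes; node++ )); do
        nodes_test[node]=$per_node
        # Hand the leftover pages to the lowest-numbered nodes first, which matches
        # the node0=513 / node1=512 result seen in the log.
        (( node < remainder )) && (( nodes_test[node]++ ))
    done

    printf 'node%d=%d\n' 0 "${nodes_test[0]}" 1 "${nodes_test[1]}"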
0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:55.001 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:55.001 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:55.001 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:55.576 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:55.576 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:55.576 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:55.576 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:55.576 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:55.576 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:55.576 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:55.576 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:55.576 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:55.576 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:55.576 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 243348548 kB' 'MemAvailable: 245980212 kB' 'Buffers: 2696 kB' 'Cached: 9358516 kB' 'SwapCached: 0 kB' 'Active: 6463700 kB' 'Inactive: 3418872 kB' 'Active(anon): 5897732 kB' 'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530664 kB' 'Mapped: 165960 kB' 'Shmem: 5376372 kB' 'KReclaimable: 251752 kB' 'Slab: 807384 kB' 'SReclaimable: 251752 kB' 'SUnreclaim: 555632 kB' 'KernelStack: 24704 kB' 'PageTables: 7612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136618244 kB' 'Committed_AS: 7310416 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329008 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 
'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB' 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 00:19:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 00:19:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.577 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.577 00:19:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.578 
00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 243349680 kB' 'MemAvailable: 245981344 kB' 'Buffers: 2696 kB' 'Cached: 9358520 kB' 'SwapCached: 0 kB' 'Active: 6463752 kB' 'Inactive: 3418872 kB' 'Active(anon): 5897784 kB' 
'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530768 kB' 'Mapped: 165948 kB' 'Shmem: 5376376 kB' 'KReclaimable: 251752 kB' 'Slab: 807420 kB' 'SReclaimable: 251752 kB' 'SUnreclaim: 555668 kB' 'KernelStack: 24704 kB' 'PageTables: 7552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136618244 kB' 'Committed_AS: 7310436 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329008 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB' 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.578 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.579 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.580 00:19:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 243349680 kB' 'MemAvailable: 245981344 kB' 'Buffers: 2696 kB' 'Cached: 9358536 kB' 'SwapCached: 0 kB' 'Active: 6464400 kB' 'Inactive: 3418872 kB' 'Active(anon): 5898432 kB' 'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531356 kB' 'Mapped: 165948 kB' 'Shmem: 5376392 kB' 'KReclaimable: 251752 kB' 'Slab: 807420 kB' 'SReclaimable: 251752 kB' 'SUnreclaim: 555668 kB' 'KernelStack: 24720 kB' 'PageTables: 7596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136618244 kB' 'Committed_AS: 7310456 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329008 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB' 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.580 
00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.580 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.581 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.581 
00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:55.582 nr_hugepages=1025 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:55.582 resv_hugepages=0 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:55.582 surplus_hugepages=0 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:55.582 anon_hugepages=0 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 
00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 243349936 kB' 'MemAvailable: 245981600 kB' 'Buffers: 2696 kB' 'Cached: 9358556 kB' 'SwapCached: 0 kB' 'Active: 6463920 kB' 'Inactive: 3418872 kB' 'Active(anon): 5897952 kB' 'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530840 kB' 'Mapped: 165948 kB' 'Shmem: 5376412 kB' 'KReclaimable: 251752 kB' 'Slab: 807420 kB' 'SReclaimable: 251752 kB' 'SUnreclaim: 555668 kB' 'KernelStack: 24736 kB' 'PageTables: 7640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136618244 kB' 'Committed_AS: 7310476 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329008 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB' 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.582 00:19:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.582 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.583 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131816228 kB' 'MemFree: 120999928 kB' 'MemUsed: 10816300 kB' 'SwapCached: 0 kB' 'Active: 4420312 kB' 'Inactive: 3310852 kB' 'Active(anon): 4241012 kB' 'Inactive(anon): 0 kB' 'Active(file): 179300 kB' 'Inactive(file): 3310852 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7611468 kB' 'Mapped: 70352 kB' 'AnonPages: 128856 kB' 'Shmem: 4121316 kB' 'KernelStack: 12936 kB' 'PageTables: 2948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138476 kB' 'Slab: 454752 kB' 'SReclaimable: 138476 kB' 'SUnreclaim: 316276 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.584 00:19:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.584 00:19:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:55.584 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.585 00:19:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126742256 kB' 'MemFree: 122350456 kB' 'MemUsed: 4391800 kB' 'SwapCached: 0 kB' 'Active: 2043924 kB' 'Inactive: 108020 kB' 'Active(anon): 1657256 kB' 'Inactive(anon): 0 kB' 'Active(file): 386668 kB' 'Inactive(file): 108020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1749804 kB' 'Mapped: 95596 kB' 'AnonPages: 402288 kB' 'Shmem: 1255116 kB' 'KernelStack: 11800 kB' 'PageTables: 4692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113276 kB' 'Slab: 352668 kB' 'SReclaimable: 113276 kB' 'SUnreclaim: 239392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.585 00:19:21 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
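The get_meminfo calls traced above reduce to a small lookup: pick /proc/meminfo or the per-node /sys/devices/system/node/node<N>/meminfo file, strip the "Node <N> " prefix, then walk the lines with IFS=': ' until the requested key matches and echo its value (every non-matching field shows up as one of the repeated "continue" entries in the trace). A minimal sketch of that pattern, reconstructed from the xtrace; the function name and error handling here are illustrative, not the exact setup/common.sh source:

#!/usr/bin/env bash
# Sketch of the lookup the trace shows: return one meminfo value, optionally
# scoped to a NUMA node (names and error handling are illustrative).
get_meminfo_sketch() {
    local get=$1 node=$2 mem_f=/proc/meminfo var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; drop that prefix.
    [[ $mem_f == /sys/* ]] && mem=("${mem[@]#Node $node }")
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the repeated "continue" entries in the trace
        echo "$val"
        return 0
    done
    return 1
}

# Example matching the trace: surplus huge pages on node 0.
get_meminfo_sketch HugePages_Surp 0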
00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.585 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- 
# echo 'node0=512 expecting 513' 00:03:55.586 node0=512 expecting 513 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:55.586 node1=513 expecting 512 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:55.586 00:03:55.586 real 0m3.097s 00:03:55.586 user 0m1.093s 00:03:55.586 sys 0m1.857s 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:55.586 00:19:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:55.586 ************************************ 00:03:55.586 END TEST odd_alloc 00:03:55.586 ************************************ 00:03:55.586 00:19:21 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:55.586 00:19:21 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:55.586 00:19:21 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:55.586 00:19:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:55.847 ************************************ 00:03:55.847 START TEST custom_alloc 00:03:55.847 ************************************ 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1122 -- # custom_alloc 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:55.847 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 
-- # for node in "${!nodes_hp[@]}" 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:55.848 00:19:21 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:03:59.150 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:59.150 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:59.150 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:59.150 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:59.150 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:59.150 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:59.150 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:59.150 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:59.150 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:59.150 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:59.150 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:59.150 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:59.150 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:59.150 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:59.150 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:59.150 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:03:59.150 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:03:59.150 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 
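At this point the test has built HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' and handed control to scripts/setup.sh, which reports the devices above as already bound to vfio-pci. A per-node huge page request like this ultimately lands in the kernel's standard sysfs knobs for 2048 kB pages; the snippet below illustrates that interface only and is not the exact command sequence setup.sh issues:

#!/usr/bin/env bash
# Illustration of the kernel interface behind a per-node split such as
# HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' (not SPDK's setup.sh itself).
set -e
declare -A nodes_hp=([0]=512 [1]=1024)
for node in "${!nodes_hp[@]}"; do
    echo "${nodes_hp[$node]}" | sudo tee \
        "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages" >/dev/null
done
# The global pool should then report the sum, which verify_nr_hugepages
# compares against below (512 + 1024 = 1536).
grep -E '^HugePages_Total' /proc/meminfo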
00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 242291812 kB' 'MemAvailable: 244923460 kB' 'Buffers: 2696 kB' 'Cached: 9358684 kB' 'SwapCached: 0 kB' 'Active: 6465588 kB' 'Inactive: 3418872 kB' 'Active(anon): 5899620 kB' 'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532500 kB' 'Mapped: 165972 kB' 'Shmem: 5376540 kB' 'KReclaimable: 251720 kB' 'Slab: 807384 kB' 'SReclaimable: 251720 kB' 'SUnreclaim: 555664 kB' 'KernelStack: 24912 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136094980 kB' 'Committed_AS: 7314224 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329296 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB' 00:03:59.150 00:19:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.150 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
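[editor note] One detail from the top of this block: the odd-looking test at setup/hugepages.sh@96, "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]", is the expanded form of a check against /sys/kernel/mm/transparent_hugepage/enabled, whose content on this machine is "always [madvise] never" with the bracketed word marking the active mode. Because THP is not set to [never], the script follows up with the AnonHugePages lookup traced here, presumably so THP-backed memory can be accounted for separately from the hugetlb pool. A tiny illustration of that check (the sysfs path is inferred from the traced string, not quoted from the script source):

  # hypothetical illustration of the THP-mode test traced at hugepages.sh@96
  thp_mode=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
  if [[ $thp_mode != *"[never]"* ]]; then
      # THP is active in some form, so AnonHugePages may be non-zero and is worth reading
      get_meminfo AnonHugePages
  fi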
00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
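[editor note] This stretch of trace is setup/common.sh's get_meminfo helper at work: it dumps every /proc/meminfo field (the long printf above) and then walks the list one key at a time, hitting continue until it reaches the field verify_nr_hugepages asked for, here AnonHugePages. A minimal sketch of that lookup, reconstructed only from the commands visible in this trace (argument handling and the final fallback are my assumptions, not the script's literal source):

  shopt -s extglob                                   # needed for the +([0-9]) pattern below

  get_meminfo() {
      local get=$1 node=${2:-}                       # field name, optional NUMA node
      local var val
      local mem_f=/proc/meminfo mem
      # a per-node query reads that node's own meminfo file instead
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")               # strip the "Node N " prefix of per-node files
      while IFS=': ' read -r var val _; do           # split "Key:   value kB" into key and value
          [[ $var == "$get" ]] || continue           # not the field we want, keep scanning
          echo "$val"                                # e.g. 0 for AnonHugePages, 1536 for HugePages_Total
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1                                       # field not present
  }

Called as "get_meminfo AnonHugePages" it prints 0 here, which is exactly the "echo 0" / "return 0" pair that closes this lookup a little further down in the trace.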
00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.151 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.152 
00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
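[editor note] At this point anon has come back as 0 and the same lookup is repeated for HugePages_Surp, then (later in this block) for HugePages_Rsvd and HugePages_Total, before the totals are compared at setup/hugepages.sh@107-110. Condensed, the accounting amounts to the sketch below; variable names and the exit style are illustrative, only the values and the intent are taken from the trace:

  # illustrative paraphrase of the verification traced at setup/hugepages.sh@97-110,
  # assuming the get_meminfo sketch above (or the real setup/common.sh) is loaded
  requested=1536                                    # pages custom_alloc asked for

  anon=$(get_meminfo AnonHugePages)                 # THP-backed anonymous memory   -> 0
  surp=$(get_meminfo HugePages_Surp)                # surplus pages beyond the pool -> 0
  resv=$(get_meminfo HugePages_Rsvd)                # reserved but not faulted in   -> 0
  total=$(get_meminfo HugePages_Total)              # pages actually in the pool    -> 1536

  echo "nr_hugepages=$requested"
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"
  echo "anon_hugepages=$anon"

  # verification only passes if the pool holds exactly what was requested,
  # with no surplus or reserved pages skewing the count (1536 == 1536 + 0 + 0)
  (( requested == total + surp + resv )) || exit 1
  (( requested == total ))               || exit 1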
00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 242290740 kB' 'MemAvailable: 244922388 kB' 'Buffers: 2696 kB' 'Cached: 9358688 kB' 'SwapCached: 0 kB' 'Active: 6465980 kB' 'Inactive: 3418872 kB' 'Active(anon): 5900012 kB' 'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532840 kB' 'Mapped: 165964 kB' 'Shmem: 5376544 kB' 'KReclaimable: 251720 kB' 'Slab: 807344 kB' 'SReclaimable: 251720 kB' 'SUnreclaim: 555624 kB' 'KernelStack: 25008 kB' 'PageTables: 8008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136094980 kB' 'Committed_AS: 7314244 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329248 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB' 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.152 00:19:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.152 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 
00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.153 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 242289568 kB' 'MemAvailable: 244921216 kB' 'Buffers: 2696 kB' 'Cached: 9358708 kB' 'SwapCached: 0 kB' 'Active: 6465736 kB' 'Inactive: 3418872 kB' 'Active(anon): 5899768 kB' 'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532568 kB' 'Mapped: 165964 kB' 'Shmem: 5376564 kB' 'KReclaimable: 251720 kB' 'Slab: 807332 kB' 'SReclaimable: 251720 kB' 'SUnreclaim: 555612 kB' 'KernelStack: 25104 kB' 'PageTables: 8576 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136094980 kB' 
'Committed_AS: 7314264 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329264 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB' 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.154 00:19:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.154 
00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.154 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 
00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.155 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.156 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:59.156 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.156 00:19:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@33 -- # return 0 00:03:59.156 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:59.156 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:59.156 nr_hugepages=1536 00:03:59.156 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:59.156 resv_hugepages=0 00:03:59.156 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:59.156 surplus_hugepages=0 00:03:59.156 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:59.156 anon_hugepages=0 00:03:59.156 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:59.156 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:59.156 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:59.156 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:59.156 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:59.156 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:59.156 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.156 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.156 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.156 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.156 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.156 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.156 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.156 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.156 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 242287740 kB' 'MemAvailable: 244919388 kB' 'Buffers: 2696 kB' 'Cached: 9358732 kB' 'SwapCached: 0 kB' 'Active: 6465100 kB' 'Inactive: 3418872 kB' 'Active(anon): 5899132 kB' 'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531876 kB' 'Mapped: 165964 kB' 'Shmem: 5376588 kB' 'KReclaimable: 251720 kB' 'Slab: 807332 kB' 'SReclaimable: 251720 kB' 'SUnreclaim: 555612 kB' 'KernelStack: 24960 kB' 'PageTables: 7968 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136094980 kB' 'Committed_AS: 7314284 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329216 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB' 00:03:59.156 00:19:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [scan condensed: every key of the /proc/meminfo dump above, from MemTotal through ShmemPmdMapped, is read and skipped because it does not match HugePages_Total] 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.157 00:19:25 
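What the trace above keeps exercising is a small meminfo parser: get_meminfo picks /proc/meminfo (or /sys/devices/system/node/nodeN/meminfo when a node id is given), drops the "Node N " prefix that the per-node files carry, and walks the "key: value" pairs until the requested key matches, echoing its value. The nr_hugepages=1536, resv_hugepages=0 and surplus_hugepages=0 lines earlier are simply three such lookups over the same dump. A minimal sketch of that pattern in plain bash follows; the function name and the regex-based prefix strip are illustrative simplifications, not the exact common.sh helper.

  #!/usr/bin/env bash
  # Sketch of the get_meminfo pattern seen in the trace (simplified, illustrative).
  get_meminfo_sketch() {
      local get=$1 node=${2:-}        # key to look up, optional NUMA node id
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local line var val _
      while IFS= read -r line; do
          # Per-node files prefix each line with "Node <id> "; drop that prefix.
          [[ $line =~ ^Node\ [0-9]+\ (.*)$ ]] && line=${BASH_REMATCH[1]}
          # Split "HugePages_Total:    1536" into key and value.
          IFS=': ' read -r var val _ <<<"$line"
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done <"$mem_f"
      return 1
  }
  # Example: get_meminfo_sketch HugePages_Free 0   -> 512 on the node0 layout above.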
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.157 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.158 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.158 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.158 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131816228 kB' 'MemFree: 120988048 kB' 'MemUsed: 10828180 kB' 'SwapCached: 0 kB' 'Active: 4419396 kB' 'Inactive: 3310852 kB' 'Active(anon): 4240096 kB' 'Inactive(anon): 0 kB' 'Active(file): 179300 kB' 'Inactive(file): 3310852 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7611528 kB' 'Mapped: 70368 kB' 'AnonPages: 127832 kB' 'Shmem: 4121376 kB' 'KernelStack: 13240 kB' 'PageTables: 3244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138476 kB' 'Slab: 454740 kB' 'SReclaimable: 138476 kB' 'SUnreclaim: 316264 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:59.158 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.158 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.158 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.158 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.158 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.158 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.158 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.158 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.158 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.158 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.158 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.158 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.158 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.158 00:19:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:59.158 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [scan condensed: the node0 meminfo keys from Active through AnonHugePages are read and skipped because they do not match HugePages_Surp] 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Surp 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126742256 kB' 'MemFree: 121298488 kB' 'MemUsed: 5443768 kB' 'SwapCached: 0 kB' 'Active: 2046468 kB' 'Inactive: 108020 kB' 'Active(anon): 1659800 kB' 'Inactive(anon): 0 kB' 'Active(file): 386668 kB' 'Inactive(file): 108020 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1749936 kB' 'Mapped: 95596 kB' 'AnonPages: 404756 kB' 'Shmem: 1255248 kB' 'KernelStack: 11752 kB' 'PageTables: 4608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 113244 kB' 'Slab: 352592 kB' 'SReclaimable: 113244 kB' 'SUnreclaim: 239348 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.159 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [scan condensed: the node1 meminfo keys from Active through AnonHugePages are read and skipped because they do not match HugePages_Surp] 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:59.160 node0=512 expecting 512 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 
00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:59.160 node1=1024 expecting 1024 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:59.160 00:03:59.160 real 0m3.452s 00:03:59.160 user 0m1.174s 00:03:59.160 sys 0m2.150s 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:03:59.160 00:19:25 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:59.160 ************************************ 00:03:59.160 END TEST custom_alloc 00:03:59.160 ************************************ 00:03:59.160 00:19:25 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:59.160 00:19:25 setup.sh.hugepages -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:03:59.160 00:19:25 setup.sh.hugepages -- common/autotest_common.sh@1104 -- # xtrace_disable 00:03:59.160 00:19:25 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:59.160 ************************************ 00:03:59.160 START TEST no_shrink_alloc 00:03:59.160 ************************************ 00:03:59.160 00:19:25 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1122 -- # no_shrink_alloc 00:03:59.160 00:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:59.160 00:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:59.160 00:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:59.160 00:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:59.160 00:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:59.160 00:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:59.160 00:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:59.160 00:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:59.160 00:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:59.160 00:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:59.161 00:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:59.161 00:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:59.161 00:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:59.161 00:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:59.161 00:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:59.161 00:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:59.161 00:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:59.161 00:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:59.161 00:19:25 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:59.161 00:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:59.161 00:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.161 00:19:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:04:02.536 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:04:02.536 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:02.536 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:04:02.536 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:04:02.536 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:04:02.536 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:04:02.536 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:04:02.536 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:04:02.536 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:04:02.536 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:04:02.536 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:04:02.536 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:04:02.536 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:04:02.536 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:04:02.536 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:02.536 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:04:02.536 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:04:02.536 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:04:02.536 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:02.536 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:02.536 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:02.536 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:02.536 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:02.536 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:02.536 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:02.536 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:02.536 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:02.536 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:02.536 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.536 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.536 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.537 00:19:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 243342692 kB' 'MemAvailable: 245974340 kB' 'Buffers: 2696 kB' 'Cached: 9358856 kB' 'SwapCached: 0 kB' 'Active: 6466368 kB' 'Inactive: 3418872 kB' 'Active(anon): 5900400 kB' 'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532820 kB' 'Mapped: 166044 kB' 'Shmem: 5376712 kB' 'KReclaimable: 251720 kB' 'Slab: 807656 kB' 'SReclaimable: 251720 kB' 'SUnreclaim: 555936 kB' 'KernelStack: 24944 kB' 'PageTables: 8140 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619268 kB' 'Committed_AS: 7313128 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329408 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB' 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
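
For reference, the meminfo snapshot captured above reports HugePages_Total: 1024, Hugepagesize: 2048 kB and Hugetlb: 2097152 kB, which is consistent with the get_test_nr_hugepages 2097152 0 call traced at the start of this test if the requested size is taken in kB. A quick arithmetic check, with the values copied from the snapshot and the kB unit treated as an assumption:

  #!/usr/bin/env bash
  # Arithmetic check of the snapshot above. Assumes the size passed to
  # get_test_nr_hugepages and Hugepagesize are both in kB, which is what the
  # traced values are consistent with.
  size_kb=2097152                                   # requested test size
  hugepagesize_kb=2048                              # Hugepagesize from /proc/meminfo
  nr_hugepages=$(( size_kb / hugepagesize_kb ))
  echo "expected nr_hugepages: ${nr_hugepages}"                      # 1024
  echo "expected Hugetlb: $(( nr_hugepages * hugepagesize_kb )) kB"  # 2097152 kB

Both results agree with the snapshot, so the full 1024-page pool is in place before the no_shrink_alloc checks run.
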
00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.537 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
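
The mem=("${mem[@]#Node +([0-9]) }") expansion that precedes each of these scans strips the "Node <N> " prefix carried by per-node meminfo files, so the same key/value parsing works for both /proc/meminfo and /sys/devices/system/node/node<N>/meminfo. A small illustration of that expansion, assuming a NUMA system where node0 exists; this is not the project's exact helper:

  #!/usr/bin/env bash
  # Illustration of the "Node <N> " prefix stripping seen in the trace,
  # mem=("${mem[@]#Node +([0-9]) }").
  shopt -s extglob                 # +([0-9]) is an extended glob

  node=0
  mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
  # Lines look like: "Node 0 HugePages_Total:   512"
  mem=("${mem[@]#Node +([0-9]) }")
  printf '%s\n' "${mem[@]}" | grep HugePages_Total
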
00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 243341580 kB' 'MemAvailable: 245973228 kB' 'Buffers: 2696 kB' 'Cached: 9358856 kB' 'SwapCached: 0 kB' 'Active: 6466816 kB' 'Inactive: 3418872 kB' 'Active(anon): 5900848 kB' 'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533188 kB' 'Mapped: 166044 kB' 'Shmem: 5376712 kB' 'KReclaimable: 251720 kB' 'Slab: 807656 kB' 'SReclaimable: 251720 kB' 'SUnreclaim: 555936 kB' 'KernelStack: 24992 kB' 'PageTables: 8064 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619268 kB' 'Committed_AS: 7314760 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329376 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB' 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.538 00:19:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.538 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.539 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 243340600 kB' 'MemAvailable: 245972248 kB' 'Buffers: 2696 kB' 'Cached: 9358876 kB' 'SwapCached: 0 kB' 'Active: 6466496 kB' 'Inactive: 3418872 kB' 'Active(anon): 5900528 kB' 'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532852 kB' 'Mapped: 165984 kB' 'Shmem: 5376732 kB' 'KReclaimable: 251720 kB' 'Slab: 807660 kB' 'SReclaimable: 251720 kB' 'SUnreclaim: 555940 kB' 'KernelStack: 24928 kB' 'PageTables: 8056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619268 kB' 'Committed_AS: 7314784 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329344 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB' 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
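
The first two scans ended with echo 0 / return 0, so the test recorded anon=0 and surp=0; the scan now in progress fetches HugePages_Rsvd the same way. The backslash-escaped right-hand sides in the trace (\H\u\g\e\P\a\g\e\s\_\R\s\v\d and similar) are how bash's xtrace renders a string that [[ ... == ... ]] will match literally rather than as a glob. A short, purely illustrative demonstration, not part of the test scripts:

  #!/usr/bin/env bash
  # Under set -x, a string that [[ == ]] will match literally is traced with
  # each character backslash-escaped, which is exactly what the log shows.
  set -x
  get=HugePages_Rsvd
  var=HugePages_Rsvd
  [[ $var == "$get" ]] && echo matched
  # trace: + [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
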
00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.540 
00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.540 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.541 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.541 00:19:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.542 00:19:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 
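For anyone skimming this trace: every setup/common.sh@31-33 entry above is one pass of a plain read loop over /proc/meminfo (or the per-node copy under /sys/devices/system/node/nodeN/meminfo, whose "Node N" prefix is stripped first at common.sh@29), comparing each key against the requested field and echoing its value on the first match; the "# echo 0" / "# return 0" pair just above is that lookup answering HugePages_Rsvd with 0. A minimal stand-alone sketch of the same pattern, using a hypothetical helper name (get_meminfo_sketch) rather than the verbatim SPDK setup/common.sh helper:

    #!/usr/bin/env bash
    # Sketch only: reproduces the parsing pattern visible in the trace, not the exact SPDK source.
    get_meminfo_sketch() {
        local get=$1          # field to look up, e.g. HugePages_Rsvd or HugePages_Total
        local node=${2:-}     # optional NUMA node number
        local mem_f=/proc/meminfo var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Each line looks like "Key:   value [kB]"; per-node files add a "Node N " prefix,
        # which is stripped here so the same key comparison works for both sources.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"   # just the number (kB for sized fields)
                return 0
            fi
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1              # field not present
    }

On the numbers recorded in this run, get_meminfo_sketch HugePages_Rsvd would print 0 and get_meminfo_sketch HugePages_Total would print 1024, which is what the "# echo 0" and "# echo 1024" returns in the surrounding trace feed into the resv/surplus bookkeeping in setup/hugepages.sh.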
00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:02.542 nr_hugepages=1024 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:02.542 resv_hugepages=0 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:02.542 surplus_hugepages=0 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:02.542 anon_hugepages=0 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 243340676 kB' 'MemAvailable: 245972324 kB' 'Buffers: 2696 kB' 'Cached: 9358896 kB' 'SwapCached: 0 kB' 'Active: 6466412 kB' 'Inactive: 3418872 kB' 'Active(anon): 5900444 kB' 'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532768 kB' 'Mapped: 165976 kB' 'Shmem: 5376752 kB' 'KReclaimable: 251720 kB' 'Slab: 807660 kB' 'SReclaimable: 251720 kB' 'SUnreclaim: 555940 kB' 'KernelStack: 24960 kB' 'PageTables: 7940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619268 kB' 'Committed_AS: 7313192 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329376 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB' 00:04:02.542 
00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.542 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.543 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- 
# for node in /sys/devices/system/node/node+([0-9]) 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131816228 kB' 'MemFree: 119932864 kB' 'MemUsed: 11883364 kB' 'SwapCached: 0 kB' 'Active: 4418800 kB' 'Inactive: 3310852 kB' 'Active(anon): 4239500 kB' 'Inactive(anon): 0 kB' 'Active(file): 179300 kB' 'Inactive(file): 3310852 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7611532 kB' 'Mapped: 70388 kB' 'AnonPages: 127084 kB' 'Shmem: 4121380 kB' 'KernelStack: 13192 kB' 'PageTables: 3152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138476 kB' 'Slab: 454364 kB' 'SReclaimable: 138476 kB' 'SUnreclaim: 315888 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.544 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 00:19:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.545 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.547 00:19:28 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:02.547 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:02.547 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:02.547 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:02.547 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:02.547 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:02.547 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:02.547 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:02.547 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:02.547 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:02.547 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:02.547 node0=1024 expecting 1024 00:04:02.547 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:02.547 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:02.547 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:02.547 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:02.547 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.547 00:19:28 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:04:05.086 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:04:05.086 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:05.086 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:04:05.086 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:04:05.086 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:04:05.086 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:04:05.086 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:04:05.086 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:04:05.086 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:04:05.086 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:04:05.086 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:04:05.086 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:04:05.086 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:04:05.086 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:04:05.086 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:05.086 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:04:05.086 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:04:05.086 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:04:05.352 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@90 -- # local sorted_t 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 243353396 kB' 'MemAvailable: 245985044 kB' 'Buffers: 2696 kB' 'Cached: 9358992 kB' 'SwapCached: 0 kB' 'Active: 6466424 kB' 'Inactive: 3418872 kB' 'Active(anon): 5900456 kB' 'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532968 kB' 'Mapped: 166016 kB' 'Shmem: 5376848 kB' 'KReclaimable: 251720 kB' 'Slab: 807952 kB' 'SReclaimable: 251720 kB' 'SUnreclaim: 556232 kB' 'KernelStack: 25040 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619268 kB' 'Committed_AS: 7315568 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329456 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB' 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.352 00:19:31 
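[editor's note] The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test in the trace checks a transparent-hugepage policy string before the AnonHugePages field is counted. A rough equivalent of that guard, assuming the string comes from the usual sysfs knob (the path and the skip logic here are my reading of the trace, not code quoted from setup/hugepages.sh):

    # Hypothetical sketch: only account AnonHugePages when THP is not
    # globally disabled, i.e. when the policy string is not "... [never]".
    thp_policy=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp_policy != *"[never]"* ]]; then
        grep AnonHugePages /proc/meminfo
    fi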
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.352 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.353 00:19:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # 
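[editor's note] After the AnonHugePages pass returns 0 ("anon=0"), the same loop starts again for HugePages_Surp. The pattern being exercised is a plain field lookup over /proc/meminfo: split each line on ': ', keep issuing continue until the key matches the requested field, then echo its value. A condensed sketch of that loop, reconstructed from the trace rather than copied from setup/common.sh:

    # Hypothetical sketch of the get_meminfo-style lookup seen in the trace:
    # read "key: value ..." pairs and print the value for the requested key.
    get_meminfo_field() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        echo 0
    }
    get_meminfo_field HugePages_Surp   # prints 0 on this node, per the log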
mapfile -t mem 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.353 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 243353428 kB' 'MemAvailable: 245985076 kB' 'Buffers: 2696 kB' 'Cached: 9358992 kB' 'SwapCached: 0 kB' 'Active: 6467300 kB' 'Inactive: 3418872 kB' 'Active(anon): 5901332 kB' 'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533844 kB' 'Mapped: 166016 kB' 'Shmem: 5376848 kB' 'KReclaimable: 251720 kB' 'Slab: 808004 kB' 'SReclaimable: 251720 kB' 'SUnreclaim: 556284 kB' 'KernelStack: 25056 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619268 kB' 'Committed_AS: 7313968 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329408 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 
00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.354 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.355 00:19:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc 
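[editor's note] With surp=0 recorded, the trace moves on to HugePages_Rsvd. To eyeball the same counters the script is walking, a generic one-liner against /proc/meminfo is enough (this is not a command taken from the SPDK scripts):

    # Quick manual check of the hugepage counters the verification loops over.
    grep -E '^(HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize|AnonHugePages):' /proc/meminfo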
-- setup/common.sh@31 -- # read -r var val _ 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 243356676 kB' 'MemAvailable: 245988324 kB' 'Buffers: 2696 kB' 'Cached: 9359012 kB' 'SwapCached: 0 kB' 'Active: 6466640 kB' 'Inactive: 3418872 kB' 'Active(anon): 5900672 kB' 'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533064 kB' 'Mapped: 166000 kB' 'Shmem: 5376868 kB' 'KReclaimable: 251720 kB' 'Slab: 808164 kB' 'SReclaimable: 251720 kB' 'SUnreclaim: 556444 kB' 'KernelStack: 24896 kB' 'PageTables: 7956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619268 kB' 'Committed_AS: 7315608 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329296 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB' 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.355 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.356 00:19:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.356 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.357 00:19:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.357 
00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:05.357 nr_hugepages=1024 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:05.357 resv_hugepages=0 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:05.357 surplus_hugepages=0 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:05.357 anon_hugepages=0 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:05.357 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558484 kB' 'MemFree: 243356696 kB' 'MemAvailable: 245988344 kB' 'Buffers: 2696 kB' 'Cached: 9359036 kB' 'SwapCached: 0 kB' 'Active: 6465812 kB' 'Inactive: 3418872 kB' 'Active(anon): 5899844 kB' 'Inactive(anon): 0 kB' 'Active(file): 565968 kB' 'Inactive(file): 3418872 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532228 kB' 'Mapped: 166000 kB' 'Shmem: 5376892 kB' 'KReclaimable: 251720 kB' 'Slab: 808000 kB' 'SReclaimable: 251720 kB' 'SUnreclaim: 556280 kB' 'KernelStack: 24848 kB' 'PageTables: 7384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619268 kB' 'Committed_AS: 7315628 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329280 kB' 'VmallocChunk: 0 kB' 'Percpu: 81920 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2527296 kB' 'DirectMap2M: 17172480 kB' 'DirectMap1G: 250609664 kB' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.358 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.359 00:19:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
131816228 kB' 'MemFree: 119945248 kB' 'MemUsed: 11870980 kB' 'SwapCached: 0 kB' 'Active: 4418884 kB' 'Inactive: 3310852 kB' 'Active(anon): 4239584 kB' 'Inactive(anon): 0 kB' 'Active(file): 179300 kB' 'Inactive(file): 3310852 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7611580 kB' 'Mapped: 70404 kB' 'AnonPages: 127316 kB' 'Shmem: 4121428 kB' 'KernelStack: 12984 kB' 'PageTables: 2952 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 138476 kB' 'Slab: 455148 kB' 'SReclaimable: 138476 kB' 'SUnreclaim: 316672 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.359 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.360 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.361 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.361 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.361 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.361 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.361 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:05.361 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:05.361 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:05.361 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:05.361 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:05.361 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:05.361 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:05.361 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:05.361 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:05.361 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:05.361 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:05.361 node0=1024 expecting 1024 00:04:05.361 00:19:31 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:05.361 00:04:05.361 real 0m6.219s 00:04:05.361 user 0m2.114s 00:04:05.361 sys 0m3.796s 00:04:05.361 00:19:31 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:05.361 00:19:31 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:05.361 ************************************ 00:04:05.361 END TEST no_shrink_alloc 00:04:05.361 ************************************ 00:04:05.621 00:19:31 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:05.621 00:19:31 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:05.621 00:19:31 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in 
"${!nodes_sys[@]}" 00:04:05.621 00:19:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.621 00:19:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:05.621 00:19:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.621 00:19:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:05.621 00:19:31 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:05.621 00:19:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.621 00:19:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:05.621 00:19:31 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:05.621 00:19:31 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:05.621 00:19:31 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:05.621 00:19:31 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:05.621 00:04:05.621 real 0m25.585s 00:04:05.621 user 0m7.916s 00:04:05.621 sys 0m14.499s 00:04:05.621 00:19:31 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:05.621 00:19:31 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:05.621 ************************************ 00:04:05.621 END TEST hugepages 00:04:05.621 ************************************ 00:04:05.621 00:19:31 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/driver.sh 00:04:05.621 00:19:31 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:05.621 00:19:31 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:05.622 00:19:31 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:05.622 ************************************ 00:04:05.622 START TEST driver 00:04:05.622 ************************************ 00:04:05.622 00:19:31 setup.sh.driver -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/driver.sh 00:04:05.622 * Looking for test storage... 
00:04:05.622 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:04:05.622 00:19:31 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:05.622 00:19:31 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:05.622 00:19:31 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:04:10.896 00:19:37 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:10.896 00:19:37 setup.sh.driver -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:10.896 00:19:37 setup.sh.driver -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:10.896 00:19:37 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:11.156 ************************************ 00:04:11.156 START TEST guess_driver 00:04:11.156 ************************************ 00:04:11.156 00:19:37 setup.sh.driver.guess_driver -- common/autotest_common.sh@1122 -- # guess_driver 00:04:11.156 00:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:11.156 00:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:11.156 00:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:11.156 00:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:11.156 00:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:11.156 00:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:11.156 00:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:11.156 00:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:11.156 00:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:11.156 00:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 334 > 0 )) 00:04:11.156 00:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:11.156 00:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:11.156 00:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:11.156 00:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:11.156 00:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:11.156 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:11.156 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:11.156 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:11.156 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:11.156 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:11.156 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:11.156 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:11.156 00:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:11.156 00:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:11.156 00:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:11.156 00:19:37 setup.sh.driver.guess_driver -- 
setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:11.156 00:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:11.156 Looking for driver=vfio-pci 00:04:11.156 00:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:11.156 00:19:37 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:11.156 00:19:37 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.156 00:19:37 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:04:13.696 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.696 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.696 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.696 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.696 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.696 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.696 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.696 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.696 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.696 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.696 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.696 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.696 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.696 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.696 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.696 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.696 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.696 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.696 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.696 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.696 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.957 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.957 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.957 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.957 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.957 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.957 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.957 00:19:39 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.957 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.957 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.957 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.957 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.957 00:19:39 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.957 00:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.957 00:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.957 00:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.957 00:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.957 00:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.957 00:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:13.957 00:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:13.957 00:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:13.957 00:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.218 00:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.218 00:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.218 00:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:14.218 00:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:14.218 00:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:14.218 00:19:40 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.602 00:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.602 00:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.602 00:19:41 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:15.863 00:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:15.863 00:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:15.863 00:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:16.431 00:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:16.431 00:19:42 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:16.431 00:19:42 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:16.431 00:19:42 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:04:21.718 00:04:21.718 real 0m10.786s 00:04:21.719 user 0m2.166s 00:04:21.719 sys 0m4.401s 00:04:21.719 00:19:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:21.719 00:19:47 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:21.719 
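
The guess_driver trace above reduces to a short decision: vfio-pci wins because the host exposes 334 IOMMU groups and modprobe --show-depends vfio_pci resolves to a chain of loadable .ko modules, after which every "-> vfio-pci" marker emitted by setup.sh config is re-read and compared. A rough sketch of that decision, reconstructed from the xtrace rather than taken from driver.sh; the fallback branch is an assumption, since this run never reaches it.

# Prefer vfio-pci when IOMMU groups exist and the module dependency chain resolves.
pick_driver_sketch() {
  local groups=(/sys/kernel/iommu_groups/*)          # 334 entries on this host
  if (( ${#groups[@]} > 0 )) && modprobe --show-depends vfio_pci | grep -q '\.ko'; then
    echo vfio-pci
  else
    echo uio_pci_generic                             # assumed fallback, not exercised here
  fi
}
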
************************************ 00:04:21.719 END TEST guess_driver 00:04:21.719 ************************************ 00:04:21.979 00:04:21.979 real 0m16.301s 00:04:21.979 user 0m3.399s 00:04:21.979 sys 0m6.886s 00:04:21.979 00:19:47 setup.sh.driver -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:21.979 00:19:47 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:21.979 ************************************ 00:04:21.979 END TEST driver 00:04:21.979 ************************************ 00:04:21.979 00:19:47 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/devices.sh 00:04:21.979 00:19:47 setup.sh -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:21.979 00:19:47 setup.sh -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:21.979 00:19:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:21.979 ************************************ 00:04:21.979 START TEST devices 00:04:21.979 ************************************ 00:04:21.979 00:19:47 setup.sh.devices -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/devices.sh 00:04:21.979 * Looking for test storage... 00:04:21.979 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:04:21.979 00:19:48 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:21.979 00:19:48 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:21.979 00:19:48 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:21.979 00:19:48 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:04:26.189 00:19:51 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:26.189 00:19:51 setup.sh.devices -- common/autotest_common.sh@1666 -- # zoned_devs=() 00:04:26.189 00:19:51 setup.sh.devices -- common/autotest_common.sh@1666 -- # local -gA zoned_devs 00:04:26.189 00:19:51 setup.sh.devices -- common/autotest_common.sh@1667 -- # local nvme bdf 00:04:26.189 00:19:51 setup.sh.devices -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:04:26.189 00:19:51 setup.sh.devices -- common/autotest_common.sh@1670 -- # is_block_zoned nvme0n1 00:04:26.189 00:19:51 setup.sh.devices -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:04:26.189 00:19:51 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:26.189 00:19:51 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:04:26.189 00:19:51 setup.sh.devices -- common/autotest_common.sh@1669 -- # for nvme in /sys/block/nvme* 00:04:26.189 00:19:51 setup.sh.devices -- common/autotest_common.sh@1670 -- # is_block_zoned nvme1n1 00:04:26.189 00:19:51 setup.sh.devices -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:04:26.189 00:19:51 setup.sh.devices -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:26.189 00:19:51 setup.sh.devices -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:04:26.189 00:19:51 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:26.189 00:19:51 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:26.189 00:19:51 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:26.189 00:19:51 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:26.189 00:19:51 setup.sh.devices -- setup/devices.sh@198 -- # 
min_disk_size=3221225472 00:04:26.189 00:19:51 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:26.189 00:19:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:26.189 00:19:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:26.189 00:19:51 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:c9:00.0 00:04:26.189 00:19:51 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\c\9\:\0\0\.\0* ]] 00:04:26.189 00:19:51 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:26.189 00:19:51 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:26.189 00:19:51 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:26.189 No valid GPT data, bailing 00:04:26.189 00:19:51 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:26.189 00:19:51 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:26.189 00:19:51 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:26.189 00:19:51 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:26.189 00:19:51 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:26.189 00:19:51 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:26.189 00:19:51 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016 00:04:26.189 00:19:51 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:04:26.189 00:19:51 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:26.189 00:19:51 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:c9:00.0 00:04:26.189 00:19:51 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:26.189 00:19:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:04:26.189 00:19:51 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme1 00:04:26.189 00:19:51 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:ca:00.0 00:04:26.189 00:19:51 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\c\a\:\0\0\.\0* ]] 00:04:26.189 00:19:51 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:04:26.189 00:19:51 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:04:26.189 00:19:51 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:04:26.189 No valid GPT data, bailing 00:04:26.189 00:19:51 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:26.189 00:19:51 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:26.189 00:19:51 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:26.189 00:19:51 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:04:26.189 00:19:51 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme1n1 00:04:26.189 00:19:51 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:04:26.189 00:19:51 setup.sh.devices -- setup/common.sh@80 -- # echo 2000398934016 00:04:26.189 00:19:51 setup.sh.devices -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:04:26.189 00:19:51 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:26.190 00:19:51 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:ca:00.0 00:04:26.190 00:19:51 setup.sh.devices -- 
setup/devices.sh@209 -- # (( 2 > 0 )) 00:04:26.190 00:19:51 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:26.190 00:19:51 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:26.190 00:19:51 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:26.190 00:19:51 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:26.190 00:19:51 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:26.190 ************************************ 00:04:26.190 START TEST nvme_mount 00:04:26.190 ************************************ 00:04:26.190 00:19:51 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1122 -- # nvme_mount 00:04:26.190 00:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:26.190 00:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:26.190 00:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:04:26.190 00:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:26.190 00:19:51 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:26.190 00:19:51 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:26.190 00:19:51 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:26.190 00:19:51 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:26.190 00:19:51 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:26.190 00:19:51 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:26.190 00:19:51 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:26.190 00:19:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:26.190 00:19:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:26.190 00:19:51 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:26.190 00:19:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:26.190 00:19:51 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:26.190 00:19:51 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:26.190 00:19:51 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:26.190 00:19:51 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:26.769 Creating new GPT entries in memory. 00:04:26.769 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:26.769 other utilities. 00:04:26.769 00:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:26.769 00:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:26.769 00:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:26.769 00:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:26.769 00:19:52 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:27.710 Creating new GPT entries in memory. 00:04:27.710 The operation has completed successfully. 00:04:27.710 00:19:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:27.710 00:19:53 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:27.710 00:19:53 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1767327 00:04:27.710 00:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.710 00:19:53 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:27.710 00:19:53 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.710 00:19:53 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:27.710 00:19:53 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:27.710 00:19:53 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.710 00:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:c9:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:27.710 00:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:04:27.710 00:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:27.710 00:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.710 00:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:27.710 00:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:27.710 00:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:27.710 00:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:27.710 00:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:27.710 00:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.710 00:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:04:27.710 00:19:53 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:27.710 00:19:53 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.710 00:19:53 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:04:30.252 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:30.252 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: 
mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:30.252 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:30.252 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.252 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:30.252 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.252 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:30.252 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.252 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:30.252 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.252 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:30.252 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.252 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:30.252 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.252 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:30.253 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.253 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:30.253 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.253 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:30.253 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.253 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:30.253 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.253 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:30.253 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.253 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:30.253 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.253 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:30.253 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.253 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:ca:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:30.253 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.253 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:30.253 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.253 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # 
[[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:30.253 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.253 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:30.253 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.253 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:30.253 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:30.824 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:30.824 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:30.824 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.824 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:30.824 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:30.824 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:30.824 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.824 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:04:30.824 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:30.824 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:30.824 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:30.824 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:30.824 00:19:56 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:31.085 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:31.085 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:31.085 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:31.085 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:31.085 00:19:57 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:31.085 00:19:57 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:31.085 00:19:57 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.085 00:19:57 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:31.085 00:19:57 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:31.085 00:19:57 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.085 00:19:57 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 
0000:c9:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:31.085 00:19:57 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:04:31.085 00:19:57 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:31.085 00:19:57 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:04:31.085 00:19:57 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:31.085 00:19:57 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:31.085 00:19:57 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:31.085 00:19:57 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:31.085 00:19:57 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:31.085 00:19:57 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.085 00:19:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:04:31.085 00:19:57 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:31.085 00:19:57 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.085 00:19:57 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # 
read -r pci _ _ status 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:ca:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:33.629 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.890 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:33.890 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:33.890 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.890 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:33.890 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:33.890 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.890 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:c9:00.0 data@nvme0n1 '' '' 00:04:33.890 00:19:59 
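
The umount plus verify call above, and the block of PCI comparisons that follows, are the same pattern repeated throughout this log: setup.sh config is re-run with PCI_ALLOWED pinned to the test disk, and each output line of the form "<bdf> (<vendor> <device>): <status>" is scanned for the expected "Active devices:" text. A sketch of that loop, reconstructed from the xtrace rather than copied from devices.sh:

# Check that the expected device usage shows up for the allowed BDF only.
expected=data@nvme0n1
found=0
while read -r pci _ _ status; do
  [[ $pci == 0000:c9:00.0 ]] || continue                       # ignore all other devices
  [[ $status == *"Active devices: "*"$expected"* ]] && found=1  # e.g. "..., so not binding PCI dev"
done < <(PCI_ALLOWED=0000:c9:00.0 /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config)
(( found == 1 )) && echo "data@nvme0n1 is active on 0000:c9:00.0"
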
setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:04:33.890 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:33.890 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:33.890 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:33.890 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:33.890 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:33.890 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:33.890 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:33.890 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:04:33.890 00:19:59 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:33.890 00:19:59 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.890 00:19:59 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:04:36.426 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:36.426 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:36.426 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:36.426 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.426 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:36.426 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.426 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:36.426 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.426 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:36.426 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.426 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:36.426 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.426 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:36.426 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.426 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:36.426 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.426 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:36.426 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.426 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:36.426 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.426 00:20:02 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:36.426 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.426 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:36.427 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.427 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:36.427 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.427 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:36.427 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.427 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:ca:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:36.427 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.686 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:36.686 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.686 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:36.686 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.686 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:36.686 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:36.686 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:36.686 00:20:02 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:37.255 00:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:37.255 00:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:37.255 00:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:37.255 00:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:37.255 00:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:04:37.255 00:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:37.255 00:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:37.255 00:20:03 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:37.255 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:37.255 00:04:37.255 real 0m11.603s 00:04:37.255 user 0m2.908s 00:04:37.255 sys 0m5.824s 00:04:37.255 00:20:03 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:37.255 00:20:03 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:37.255 ************************************ 00:04:37.255 END TEST nvme_mount 00:04:37.255 ************************************ 00:04:37.255 00:20:03 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:37.255 00:20:03 setup.sh.devices -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:04:37.255 
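
Every test in this log, including the dm_mount run being launched here, goes through the same run_test wrapper that prints the START/END banners and the real/user/sys timing blocks. A rough sketch of that wrapper; the real helper in autotest_common.sh also manages xtrace and argument checks, so this is only the visible skeleton.

# Produce the banner/timing pattern seen around each TEST section of this log.
run_test_sketch() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  time "$@"                      # the timing output becomes the real/user/sys lines
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
}
# Example (assuming a dm_mount function is defined): run_test_sketch dm_mount dm_mount
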
00:20:03 setup.sh.devices -- common/autotest_common.sh@1104 -- # xtrace_disable 00:04:37.255 00:20:03 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:37.255 ************************************ 00:04:37.255 START TEST dm_mount 00:04:37.255 ************************************ 00:04:37.255 00:20:03 setup.sh.devices.dm_mount -- common/autotest_common.sh@1122 -- # dm_mount 00:04:37.255 00:20:03 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:37.255 00:20:03 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:37.255 00:20:03 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:37.255 00:20:03 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:37.255 00:20:03 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:37.255 00:20:03 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:37.255 00:20:03 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:37.255 00:20:03 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:37.255 00:20:03 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:37.255 00:20:03 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:37.255 00:20:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:37.255 00:20:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:37.255 00:20:03 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:37.255 00:20:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:37.255 00:20:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:37.255 00:20:03 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:37.255 00:20:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:37.255 00:20:03 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:37.255 00:20:03 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:37.255 00:20:03 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:37.255 00:20:03 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:38.191 Creating new GPT entries in memory. 00:04:38.191 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:38.191 other utilities. 00:04:38.191 00:20:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:38.191 00:20:04 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:38.191 00:20:04 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:38.191 00:20:04 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:38.191 00:20:04 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:39.197 Creating new GPT entries in memory. 00:04:39.197 The operation has completed successfully. 
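
The flock/sgdisk call that just completed, and the second one that follows, carve two 1 GiB partitions back to back starting at sector 2048, while sync_dev_uevents.sh waits for the matching partition uevents. The sector arithmetic behind --new=1:2048:2099199 and --new=2:2099200:4196351, shown as a dry run rather than the test code itself:

# 1 GiB per partition: 1073741824 bytes / 512-byte sectors = 2097152 sectors.
disk=/dev/nvme0n1
size_sectors=$(( 1073741824 / 512 ))                 # 2097152
part_start=2048
for part in 1 2; do
  part_end=$(( part_start + size_sectors - 1 ))      # 2099199, then 4196351
  echo "would run: flock $disk sgdisk $disk --new=${part}:${part_start}:${part_end}"
  part_start=$(( part_end + 1 ))                     # next partition starts right after
done
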
00:04:39.197 00:20:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:39.197 00:20:05 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:39.197 00:20:05 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:39.197 00:20:05 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:39.197 00:20:05 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:40.577 The operation has completed successfully. 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1772122 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount size= 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:c9:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:40.577 00:20:06 
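
Once dmsetup create succeeds, the trace above resolves the /dev/mapper symlink to the underlying dm node and confirms that both partitions list it as a holder before mkfs.ext4 and the mount. A small sketch of that resolution step, reconstructed from the xtrace; it assumes the nvme_dm_test device already exists on the host.

# Resolve the mapper symlink and confirm which partitions back the dm device.
dm_name=nvme_dm_test
dm_node=$(readlink -f "/dev/mapper/$dm_name")        # /dev/dm-0 in this run
dm_id=${dm_node##*/}                                  # dm-0
for part in nvme0n1p1 nvme0n1p2; do
  [[ -e /sys/class/block/$part/holders/$dm_id ]] && echo "$part backs $dm_id"
done
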
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:40.577 00:20:06 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:ca:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:43.119 00:20:08 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.119 00:20:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:43.119 00:20:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.119 00:20:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:43.119 00:20:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.119 00:20:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:43.119 00:20:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.119 00:20:09 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:43.119 00:20:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.689 00:20:09 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:43.689 00:20:09 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:43.689 00:20:09 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:04:43.689 00:20:09 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:43.689 00:20:09 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:43.689 00:20:09 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:04:43.689 00:20:09 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:c9:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:43.689 00:20:09 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:04:43.689 00:20:09 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:43.689 00:20:09 setup.sh.devices.dm_mount -- 
setup/devices.sh@50 -- # local mount_point= 00:04:43.689 00:20:09 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:43.689 00:20:09 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:43.689 00:20:09 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:43.689 00:20:09 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:43.689 00:20:09 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:43.689 00:20:09 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:04:43.689 00:20:09 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:43.689 00:20:09 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:43.689 00:20:09 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.228 
00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:ca:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:04:46.228 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.798 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.798 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:46.798 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:46.798 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:46.798 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:04:46.798 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:46.798 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:46.798 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:46.798 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:46.798 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:46.798 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:46.798 00:20:12 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:46.798 00:04:46.798 real 0m9.525s 00:04:46.798 user 0m1.976s 00:04:46.798 sys 0m4.124s 00:04:46.798 00:20:12 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:46.798 00:20:12 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:46.798 ************************************ 00:04:46.798 END TEST dm_mount 00:04:46.798 ************************************ 00:04:46.798 00:20:12 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:46.798 00:20:12 setup.sh.devices -- setup/devices.sh@11 -- # 
cleanup_nvme 00:04:46.798 00:20:12 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.798 00:20:12 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:46.798 00:20:12 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:46.798 00:20:12 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:46.798 00:20:12 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:47.057 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:47.057 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:04:47.057 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:47.057 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:47.057 00:20:13 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:47.057 00:20:13 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:04:47.057 00:20:13 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:47.057 00:20:13 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:47.057 00:20:13 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:47.057 00:20:13 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:47.057 00:20:13 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:47.057 00:04:47.057 real 0m25.182s 00:04:47.057 user 0m6.104s 00:04:47.057 sys 0m12.419s 00:04:47.057 00:20:13 setup.sh.devices -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:47.057 00:20:13 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:47.057 ************************************ 00:04:47.057 END TEST devices 00:04:47.057 ************************************ 00:04:47.057 00:04:47.057 real 1m33.007s 00:04:47.057 user 0m23.950s 00:04:47.057 sys 0m46.692s 00:04:47.057 00:20:13 setup.sh -- common/autotest_common.sh@1123 -- # xtrace_disable 00:04:47.057 00:20:13 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:47.057 ************************************ 00:04:47.057 END TEST setup.sh 00:04:47.057 ************************************ 00:04:47.057 00:20:13 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status 00:04:50.348 Hugepages 00:04:50.348 node hugesize free / total 00:04:50.348 node0 1048576kB 0 / 0 00:04:50.348 node0 2048kB 2048 / 2048 00:04:50.348 node1 1048576kB 0 / 0 00:04:50.348 node1 2048kB 0 / 0 00:04:50.348 00:04:50.348 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:50.349 DSA 0000:6a:01.0 8086 0b25 0 idxd - - 00:04:50.349 IAA 0000:6a:02.0 8086 0cfe 0 idxd - - 00:04:50.349 DSA 0000:6f:01.0 8086 0b25 0 idxd - - 00:04:50.349 IAA 0000:6f:02.0 8086 0cfe 0 idxd - - 00:04:50.349 DSA 0000:74:01.0 8086 0b25 0 idxd - - 00:04:50.349 IAA 0000:74:02.0 8086 0cfe 0 idxd - - 00:04:50.349 DSA 0000:79:01.0 8086 0b25 0 idxd - - 00:04:50.349 IAA 0000:79:02.0 8086 0cfe 0 idxd - - 00:04:50.349 NVMe 0000:c9:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:50.349 NVMe 0000:ca:00.0 8086 0a54 1 nvme nvme1 nvme1n1 00:04:50.349 DSA 0000:e7:01.0 8086 0b25 1 idxd - - 00:04:50.349 IAA 0000:e7:02.0 8086 0cfe 1 idxd - - 00:04:50.349 DSA 0000:ec:01.0 8086 0b25 1 idxd - - 00:04:50.349 IAA 0000:ec:02.0 8086 0cfe 1 idxd - - 00:04:50.349 DSA 0000:f1:01.0 8086 0b25 1 idxd 
- - 00:04:50.349 IAA 0000:f1:02.0 8086 0cfe 1 idxd - - 00:04:50.349 DSA 0000:f6:01.0 8086 0b25 1 idxd - - 00:04:50.349 IAA 0000:f6:02.0 8086 0cfe 1 idxd - - 00:04:50.349 00:20:16 -- spdk/autotest.sh@130 -- # uname -s 00:04:50.349 00:20:16 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:50.349 00:20:16 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:50.349 00:20:16 -- common/autotest_common.sh@1528 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:04:52.889 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:53.150 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:53.150 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:53.150 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:04:53.150 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:53.150 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:04:53.150 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:53.150 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:04:53.411 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:53.411 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:04:53.411 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:04:53.411 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:04:53.411 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:53.411 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:04:53.411 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:04:53.411 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:04:55.328 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:04:55.328 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci 00:04:55.901 00:20:21 -- common/autotest_common.sh@1529 -- # sleep 1 00:04:56.842 00:20:22 -- common/autotest_common.sh@1530 -- # bdfs=() 00:04:56.842 00:20:22 -- common/autotest_common.sh@1530 -- # local bdfs 00:04:56.842 00:20:22 -- common/autotest_common.sh@1531 -- # bdfs=($(get_nvme_bdfs)) 00:04:56.842 00:20:22 -- common/autotest_common.sh@1531 -- # get_nvme_bdfs 00:04:56.842 00:20:22 -- common/autotest_common.sh@1510 -- # bdfs=() 00:04:56.842 00:20:22 -- common/autotest_common.sh@1510 -- # local bdfs 00:04:56.842 00:20:22 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:56.842 00:20:22 -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:56.842 00:20:22 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:04:56.842 00:20:22 -- common/autotest_common.sh@1512 -- # (( 2 == 0 )) 00:04:56.842 00:20:22 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:c9:00.0 0000:ca:00.0 00:04:56.842 00:20:22 -- common/autotest_common.sh@1533 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:05:00.136 Waiting for block devices as requested 00:05:00.136 0000:c9:00.0 (8086 0a54): vfio-pci -> nvme 00:05:00.136 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:05:00.136 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:05:00.136 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:05:00.136 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:05:00.395 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:05:00.395 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:05:00.655 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:05:00.655 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:05:00.915 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:05:00.915 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:05:00.915 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:05:01.177 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:05:01.177 0000:ca:00.0 (8086 0a54): vfio-pci -> nvme 00:05:01.437 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 
00:05:01.437 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:05:01.697 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:05:01.697 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:05:02.267 00:20:28 -- common/autotest_common.sh@1535 -- # for bdf in "${bdfs[@]}" 00:05:02.267 00:20:28 -- common/autotest_common.sh@1536 -- # get_nvme_ctrlr_from_bdf 0000:c9:00.0 00:05:02.267 00:20:28 -- common/autotest_common.sh@1499 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:02.267 00:20:28 -- common/autotest_common.sh@1499 -- # grep 0000:c9:00.0/nvme/nvme 00:05:02.267 00:20:28 -- common/autotest_common.sh@1499 -- # bdf_sysfs_path=/sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 00:05:02.267 00:20:28 -- common/autotest_common.sh@1500 -- # [[ -z /sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 ]] 00:05:02.267 00:20:28 -- common/autotest_common.sh@1504 -- # basename /sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 00:05:02.267 00:20:28 -- common/autotest_common.sh@1504 -- # printf '%s\n' nvme0 00:05:02.267 00:20:28 -- common/autotest_common.sh@1536 -- # nvme_ctrlr=/dev/nvme0 00:05:02.267 00:20:28 -- common/autotest_common.sh@1537 -- # [[ -z /dev/nvme0 ]] 00:05:02.267 00:20:28 -- common/autotest_common.sh@1542 -- # nvme id-ctrl /dev/nvme0 00:05:02.267 00:20:28 -- common/autotest_common.sh@1542 -- # grep oacs 00:05:02.267 00:20:28 -- common/autotest_common.sh@1542 -- # cut -d: -f2 00:05:02.267 00:20:28 -- common/autotest_common.sh@1542 -- # oacs=' 0xe' 00:05:02.267 00:20:28 -- common/autotest_common.sh@1543 -- # oacs_ns_manage=8 00:05:02.267 00:20:28 -- common/autotest_common.sh@1545 -- # [[ 8 -ne 0 ]] 00:05:02.267 00:20:28 -- common/autotest_common.sh@1551 -- # nvme id-ctrl /dev/nvme0 00:05:02.267 00:20:28 -- common/autotest_common.sh@1551 -- # grep unvmcap 00:05:02.267 00:20:28 -- common/autotest_common.sh@1551 -- # cut -d: -f2 00:05:02.267 00:20:28 -- common/autotest_common.sh@1551 -- # unvmcap=' 0' 00:05:02.267 00:20:28 -- common/autotest_common.sh@1552 -- # [[ 0 -eq 0 ]] 00:05:02.267 00:20:28 -- common/autotest_common.sh@1554 -- # continue 00:05:02.267 00:20:28 -- common/autotest_common.sh@1535 -- # for bdf in "${bdfs[@]}" 00:05:02.267 00:20:28 -- common/autotest_common.sh@1536 -- # get_nvme_ctrlr_from_bdf 0000:ca:00.0 00:05:02.267 00:20:28 -- common/autotest_common.sh@1499 -- # grep 0000:ca:00.0/nvme/nvme 00:05:02.267 00:20:28 -- common/autotest_common.sh@1499 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:02.267 00:20:28 -- common/autotest_common.sh@1499 -- # bdf_sysfs_path=/sys/devices/pci0000:c7/0000:c7:05.0/0000:ca:00.0/nvme/nvme1 00:05:02.267 00:20:28 -- common/autotest_common.sh@1500 -- # [[ -z /sys/devices/pci0000:c7/0000:c7:05.0/0000:ca:00.0/nvme/nvme1 ]] 00:05:02.267 00:20:28 -- common/autotest_common.sh@1504 -- # basename /sys/devices/pci0000:c7/0000:c7:05.0/0000:ca:00.0/nvme/nvme1 00:05:02.267 00:20:28 -- common/autotest_common.sh@1504 -- # printf '%s\n' nvme1 00:05:02.267 00:20:28 -- common/autotest_common.sh@1536 -- # nvme_ctrlr=/dev/nvme1 00:05:02.267 00:20:28 -- common/autotest_common.sh@1537 -- # [[ -z /dev/nvme1 ]] 00:05:02.267 00:20:28 -- common/autotest_common.sh@1542 -- # nvme id-ctrl /dev/nvme1 00:05:02.267 00:20:28 -- common/autotest_common.sh@1542 -- # grep oacs 00:05:02.267 00:20:28 -- common/autotest_common.sh@1542 -- # cut -d: -f2 00:05:02.267 00:20:28 -- common/autotest_common.sh@1542 -- # oacs=' 0xe' 00:05:02.267 00:20:28 -- common/autotest_common.sh@1543 -- # oacs_ns_manage=8 00:05:02.267 00:20:28 -- 
common/autotest_common.sh@1545 -- # [[ 8 -ne 0 ]] 00:05:02.267 00:20:28 -- common/autotest_common.sh@1551 -- # nvme id-ctrl /dev/nvme1 00:05:02.267 00:20:28 -- common/autotest_common.sh@1551 -- # grep unvmcap 00:05:02.267 00:20:28 -- common/autotest_common.sh@1551 -- # cut -d: -f2 00:05:02.267 00:20:28 -- common/autotest_common.sh@1551 -- # unvmcap=' 0' 00:05:02.267 00:20:28 -- common/autotest_common.sh@1552 -- # [[ 0 -eq 0 ]] 00:05:02.267 00:20:28 -- common/autotest_common.sh@1554 -- # continue 00:05:02.267 00:20:28 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:02.267 00:20:28 -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:02.267 00:20:28 -- common/autotest_common.sh@10 -- # set +x 00:05:02.267 00:20:28 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:02.267 00:20:28 -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:02.267 00:20:28 -- common/autotest_common.sh@10 -- # set +x 00:05:02.267 00:20:28 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:05:05.563 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:05:05.563 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:05:05.563 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:05:05.563 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:05:05.563 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:05:05.563 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:05:05.563 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:05:05.563 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:05:05.563 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:05:05.563 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:05:05.563 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:05:05.563 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:05:05.563 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:05:05.563 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:05:05.563 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:05:05.823 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:05:07.208 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:05:07.467 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci 00:05:08.037 00:20:34 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:08.037 00:20:34 -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:08.037 00:20:34 -- common/autotest_common.sh@10 -- # set +x 00:05:08.037 00:20:34 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:08.037 00:20:34 -- common/autotest_common.sh@1588 -- # mapfile -t bdfs 00:05:08.037 00:20:34 -- common/autotest_common.sh@1588 -- # get_nvme_bdfs_by_id 0x0a54 00:05:08.037 00:20:34 -- common/autotest_common.sh@1574 -- # bdfs=() 00:05:08.037 00:20:34 -- common/autotest_common.sh@1574 -- # local bdfs 00:05:08.037 00:20:34 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs 00:05:08.037 00:20:34 -- common/autotest_common.sh@1510 -- # bdfs=() 00:05:08.037 00:20:34 -- common/autotest_common.sh@1510 -- # local bdfs 00:05:08.037 00:20:34 -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:08.037 00:20:34 -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:05:08.037 00:20:34 -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:08.037 00:20:34 -- common/autotest_common.sh@1512 -- # (( 2 == 0 )) 00:05:08.037 00:20:34 -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:c9:00.0 0000:ca:00.0 00:05:08.037 00:20:34 -- common/autotest_common.sh@1576 -- # for bdf in $(get_nvme_bdfs) 00:05:08.037 00:20:34 -- common/autotest_common.sh@1577 -- # cat 
/sys/bus/pci/devices/0000:c9:00.0/device 00:05:08.037 00:20:34 -- common/autotest_common.sh@1577 -- # device=0x0a54 00:05:08.037 00:20:34 -- common/autotest_common.sh@1578 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:08.037 00:20:34 -- common/autotest_common.sh@1579 -- # bdfs+=($bdf) 00:05:08.037 00:20:34 -- common/autotest_common.sh@1576 -- # for bdf in $(get_nvme_bdfs) 00:05:08.037 00:20:34 -- common/autotest_common.sh@1577 -- # cat /sys/bus/pci/devices/0000:ca:00.0/device 00:05:08.037 00:20:34 -- common/autotest_common.sh@1577 -- # device=0x0a54 00:05:08.037 00:20:34 -- common/autotest_common.sh@1578 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:08.037 00:20:34 -- common/autotest_common.sh@1579 -- # bdfs+=($bdf) 00:05:08.037 00:20:34 -- common/autotest_common.sh@1583 -- # printf '%s\n' 0000:c9:00.0 0000:ca:00.0 00:05:08.037 00:20:34 -- common/autotest_common.sh@1589 -- # [[ -z 0000:c9:00.0 ]] 00:05:08.037 00:20:34 -- common/autotest_common.sh@1594 -- # spdk_tgt_pid=1783308 00:05:08.037 00:20:34 -- common/autotest_common.sh@1595 -- # waitforlisten 1783308 00:05:08.037 00:20:34 -- common/autotest_common.sh@1593 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:05:08.037 00:20:34 -- common/autotest_common.sh@828 -- # '[' -z 1783308 ']' 00:05:08.037 00:20:34 -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.037 00:20:34 -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:08.037 00:20:34 -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.037 00:20:34 -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:08.037 00:20:34 -- common/autotest_common.sh@10 -- # set +x 00:05:08.297 [2024-05-15 00:20:34.242808] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
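[editor's note] The opal_revert_cleanup trace above picks out only the controllers whose PCI device ID is 0x0a54 by reading each device's sysfs attribute. A minimal standalone sketch of that same check follows; the two BDFs are hard-coded here for illustration (they are the two NVMe devices seen in this run), whereas the test derives its list from gen_nvme.sh.

    # Collect NVMe BDFs whose PCI device ID is 0x0a54, mirroring the sysfs check in the trace above.
    bdfs=()
    for bdf in 0000:c9:00.0 0000:ca:00.0; do              # illustrative list; the test uses get_nvme_bdfs
        device=$(cat "/sys/bus/pci/devices/$bdf/device")  # prints e.g. 0x0a54
        if [[ $device == 0x0a54 ]]; then
            bdfs+=("$bdf")
        fi
    done
    printf '%s\n' "${bdfs[@]}"

Both controllers in this run report 0x0a54, which is why both end up in the bdfs array that the opal revert loop then iterates over.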
00:05:08.297 [2024-05-15 00:20:34.242932] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1783308 ] 00:05:08.297 EAL: No free 2048 kB hugepages reported on node 1 00:05:08.297 [2024-05-15 00:20:34.365183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.558 [2024-05-15 00:20:34.469005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.818 00:20:34 -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:08.818 00:20:34 -- common/autotest_common.sh@861 -- # return 0 00:05:08.818 00:20:34 -- common/autotest_common.sh@1597 -- # bdf_id=0 00:05:08.818 00:20:34 -- common/autotest_common.sh@1598 -- # for bdf in "${bdfs[@]}" 00:05:08.818 00:20:34 -- common/autotest_common.sh@1599 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:c9:00.0 00:05:12.114 nvme0n1 00:05:12.114 00:20:37 -- common/autotest_common.sh@1601 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:12.114 [2024-05-15 00:20:38.009664] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:12.114 request: 00:05:12.114 { 00:05:12.114 "nvme_ctrlr_name": "nvme0", 00:05:12.114 "password": "test", 00:05:12.114 "method": "bdev_nvme_opal_revert", 00:05:12.114 "req_id": 1 00:05:12.114 } 00:05:12.114 Got JSON-RPC error response 00:05:12.114 response: 00:05:12.114 { 00:05:12.114 "code": -32602, 00:05:12.114 "message": "Invalid parameters" 00:05:12.114 } 00:05:12.114 00:20:38 -- common/autotest_common.sh@1601 -- # true 00:05:12.114 00:20:38 -- common/autotest_common.sh@1602 -- # (( ++bdf_id )) 00:05:12.114 00:20:38 -- common/autotest_common.sh@1598 -- # for bdf in "${bdfs[@]}" 00:05:12.114 00:20:38 -- common/autotest_common.sh@1599 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme1 -t pcie -a 0000:ca:00.0 00:05:15.479 nvme1n1 00:05:15.479 00:20:40 -- common/autotest_common.sh@1601 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme1 -p test 00:05:15.479 [2024-05-15 00:20:41.117602] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme1 not support opal 00:05:15.479 request: 00:05:15.479 { 00:05:15.479 "nvme_ctrlr_name": "nvme1", 00:05:15.479 "password": "test", 00:05:15.479 "method": "bdev_nvme_opal_revert", 00:05:15.479 "req_id": 1 00:05:15.479 } 00:05:15.479 Got JSON-RPC error response 00:05:15.479 response: 00:05:15.479 { 00:05:15.479 "code": -32602, 00:05:15.479 "message": "Invalid parameters" 00:05:15.479 } 00:05:15.479 00:20:41 -- common/autotest_common.sh@1601 -- # true 00:05:15.479 00:20:41 -- common/autotest_common.sh@1602 -- # (( ++bdf_id )) 00:05:15.479 00:20:41 -- common/autotest_common.sh@1605 -- # killprocess 1783308 00:05:15.479 00:20:41 -- common/autotest_common.sh@947 -- # '[' -z 1783308 ']' 00:05:15.479 00:20:41 -- common/autotest_common.sh@951 -- # kill -0 1783308 00:05:15.479 00:20:41 -- common/autotest_common.sh@952 -- # uname 00:05:15.480 00:20:41 -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:15.480 00:20:41 -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1783308 00:05:15.480 00:20:41 -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:15.480 00:20:41 -- common/autotest_common.sh@957 -- # '[' reactor_0 
= sudo ']' 00:05:15.480 00:20:41 -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1783308' 00:05:15.480 killing process with pid 1783308 00:05:15.480 00:20:41 -- common/autotest_common.sh@966 -- # kill 1783308 00:05:15.480 00:20:41 -- common/autotest_common.sh@971 -- # wait 1783308 00:05:18.777 00:20:44 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:18.777 00:20:44 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:18.777 00:20:44 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:18.777 00:20:44 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:18.777 00:20:44 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:18.777 00:20:44 -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:18.777 00:20:44 -- common/autotest_common.sh@10 -- # set +x 00:05:18.778 00:20:44 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env.sh 00:05:18.778 00:20:44 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:18.778 00:20:44 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:18.778 00:20:44 -- common/autotest_common.sh@10 -- # set +x 00:05:18.778 ************************************ 00:05:18.778 START TEST env 00:05:18.778 ************************************ 00:05:18.778 00:20:44 env -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env.sh 00:05:18.778 * Looking for test storage... 00:05:18.778 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env 00:05:18.778 00:20:44 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/memory/memory_ut 00:05:18.778 00:20:44 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:18.778 00:20:44 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:18.778 00:20:44 env -- common/autotest_common.sh@10 -- # set +x 00:05:18.778 ************************************ 00:05:18.778 START TEST env_memory 00:05:18.778 ************************************ 00:05:18.778 00:20:44 env.env_memory -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/memory/memory_ut 00:05:18.778 00:05:18.778 00:05:18.778 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.778 http://cunit.sourceforge.net/ 00:05:18.778 00:05:18.778 00:05:18.778 Suite: memory 00:05:18.778 Test: alloc and free memory map ...[2024-05-15 00:20:44.821242] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:18.778 passed 00:05:18.778 Test: mem map translation ...[2024-05-15 00:20:44.868554] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:18.778 [2024-05-15 00:20:44.868609] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:18.778 [2024-05-15 00:20:44.868691] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:18.778 [2024-05-15 00:20:44.868710] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:18.778 passed 00:05:19.039 Test: mem map registration ...[2024-05-15 00:20:44.954929] 
/var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:19.039 [2024-05-15 00:20:44.954963] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:19.039 passed 00:05:19.039 Test: mem map adjacent registrations ...passed 00:05:19.039 00:05:19.039 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.039 suites 1 1 n/a 0 0 00:05:19.039 tests 4 4 4 0 0 00:05:19.039 asserts 152 152 152 0 n/a 00:05:19.039 00:05:19.039 Elapsed time = 0.293 seconds 00:05:19.039 00:05:19.039 real 0m0.318s 00:05:19.039 user 0m0.300s 00:05:19.039 sys 0m0.017s 00:05:19.039 00:20:45 env.env_memory -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:19.039 00:20:45 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:19.039 ************************************ 00:05:19.039 END TEST env_memory 00:05:19.039 ************************************ 00:05:19.039 00:20:45 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:19.039 00:20:45 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:19.039 00:20:45 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:19.039 00:20:45 env -- common/autotest_common.sh@10 -- # set +x 00:05:19.039 ************************************ 00:05:19.039 START TEST env_vtophys 00:05:19.039 ************************************ 00:05:19.039 00:20:45 env.env_vtophys -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:19.039 EAL: lib.eal log level changed from notice to debug 00:05:19.039 EAL: Detected lcore 0 as core 0 on socket 0 00:05:19.039 EAL: Detected lcore 1 as core 1 on socket 0 00:05:19.039 EAL: Detected lcore 2 as core 2 on socket 0 00:05:19.039 EAL: Detected lcore 3 as core 3 on socket 0 00:05:19.039 EAL: Detected lcore 4 as core 4 on socket 0 00:05:19.039 EAL: Detected lcore 5 as core 5 on socket 0 00:05:19.039 EAL: Detected lcore 6 as core 6 on socket 0 00:05:19.039 EAL: Detected lcore 7 as core 7 on socket 0 00:05:19.039 EAL: Detected lcore 8 as core 8 on socket 0 00:05:19.039 EAL: Detected lcore 9 as core 9 on socket 0 00:05:19.039 EAL: Detected lcore 10 as core 10 on socket 0 00:05:19.039 EAL: Detected lcore 11 as core 11 on socket 0 00:05:19.039 EAL: Detected lcore 12 as core 12 on socket 0 00:05:19.039 EAL: Detected lcore 13 as core 13 on socket 0 00:05:19.039 EAL: Detected lcore 14 as core 14 on socket 0 00:05:19.039 EAL: Detected lcore 15 as core 15 on socket 0 00:05:19.039 EAL: Detected lcore 16 as core 16 on socket 0 00:05:19.039 EAL: Detected lcore 17 as core 17 on socket 0 00:05:19.039 EAL: Detected lcore 18 as core 18 on socket 0 00:05:19.039 EAL: Detected lcore 19 as core 19 on socket 0 00:05:19.039 EAL: Detected lcore 20 as core 20 on socket 0 00:05:19.039 EAL: Detected lcore 21 as core 21 on socket 0 00:05:19.039 EAL: Detected lcore 22 as core 22 on socket 0 00:05:19.039 EAL: Detected lcore 23 as core 23 on socket 0 00:05:19.039 EAL: Detected lcore 24 as core 24 on socket 0 00:05:19.039 EAL: Detected lcore 25 as core 25 on socket 0 00:05:19.039 EAL: Detected lcore 26 as core 26 on socket 0 00:05:19.039 EAL: Detected lcore 27 as core 27 on socket 0 00:05:19.039 EAL: Detected lcore 28 as core 28 on socket 0 00:05:19.039 EAL: Detected lcore 29 as core 29 on socket 0 
00:05:19.039 EAL: Detected lcore 30 as core 30 on socket 0 00:05:19.039 EAL: Detected lcore 31 as core 31 on socket 0 00:05:19.039 EAL: Detected lcore 32 as core 0 on socket 1 00:05:19.039 EAL: Detected lcore 33 as core 1 on socket 1 00:05:19.039 EAL: Detected lcore 34 as core 2 on socket 1 00:05:19.039 EAL: Detected lcore 35 as core 3 on socket 1 00:05:19.039 EAL: Detected lcore 36 as core 4 on socket 1 00:05:19.039 EAL: Detected lcore 37 as core 5 on socket 1 00:05:19.039 EAL: Detected lcore 38 as core 6 on socket 1 00:05:19.039 EAL: Detected lcore 39 as core 7 on socket 1 00:05:19.039 EAL: Detected lcore 40 as core 8 on socket 1 00:05:19.039 EAL: Detected lcore 41 as core 9 on socket 1 00:05:19.039 EAL: Detected lcore 42 as core 10 on socket 1 00:05:19.039 EAL: Detected lcore 43 as core 11 on socket 1 00:05:19.039 EAL: Detected lcore 44 as core 12 on socket 1 00:05:19.039 EAL: Detected lcore 45 as core 13 on socket 1 00:05:19.039 EAL: Detected lcore 46 as core 14 on socket 1 00:05:19.039 EAL: Detected lcore 47 as core 15 on socket 1 00:05:19.039 EAL: Detected lcore 48 as core 16 on socket 1 00:05:19.039 EAL: Detected lcore 49 as core 17 on socket 1 00:05:19.039 EAL: Detected lcore 50 as core 18 on socket 1 00:05:19.039 EAL: Detected lcore 51 as core 19 on socket 1 00:05:19.039 EAL: Detected lcore 52 as core 20 on socket 1 00:05:19.039 EAL: Detected lcore 53 as core 21 on socket 1 00:05:19.039 EAL: Detected lcore 54 as core 22 on socket 1 00:05:19.039 EAL: Detected lcore 55 as core 23 on socket 1 00:05:19.039 EAL: Detected lcore 56 as core 24 on socket 1 00:05:19.039 EAL: Detected lcore 57 as core 25 on socket 1 00:05:19.039 EAL: Detected lcore 58 as core 26 on socket 1 00:05:19.039 EAL: Detected lcore 59 as core 27 on socket 1 00:05:19.039 EAL: Detected lcore 60 as core 28 on socket 1 00:05:19.039 EAL: Detected lcore 61 as core 29 on socket 1 00:05:19.039 EAL: Detected lcore 62 as core 30 on socket 1 00:05:19.039 EAL: Detected lcore 63 as core 31 on socket 1 00:05:19.039 EAL: Detected lcore 64 as core 0 on socket 0 00:05:19.039 EAL: Detected lcore 65 as core 1 on socket 0 00:05:19.039 EAL: Detected lcore 66 as core 2 on socket 0 00:05:19.039 EAL: Detected lcore 67 as core 3 on socket 0 00:05:19.039 EAL: Detected lcore 68 as core 4 on socket 0 00:05:19.039 EAL: Detected lcore 69 as core 5 on socket 0 00:05:19.039 EAL: Detected lcore 70 as core 6 on socket 0 00:05:19.039 EAL: Detected lcore 71 as core 7 on socket 0 00:05:19.039 EAL: Detected lcore 72 as core 8 on socket 0 00:05:19.039 EAL: Detected lcore 73 as core 9 on socket 0 00:05:19.039 EAL: Detected lcore 74 as core 10 on socket 0 00:05:19.039 EAL: Detected lcore 75 as core 11 on socket 0 00:05:19.039 EAL: Detected lcore 76 as core 12 on socket 0 00:05:19.039 EAL: Detected lcore 77 as core 13 on socket 0 00:05:19.039 EAL: Detected lcore 78 as core 14 on socket 0 00:05:19.039 EAL: Detected lcore 79 as core 15 on socket 0 00:05:19.039 EAL: Detected lcore 80 as core 16 on socket 0 00:05:19.039 EAL: Detected lcore 81 as core 17 on socket 0 00:05:19.039 EAL: Detected lcore 82 as core 18 on socket 0 00:05:19.039 EAL: Detected lcore 83 as core 19 on socket 0 00:05:19.039 EAL: Detected lcore 84 as core 20 on socket 0 00:05:19.039 EAL: Detected lcore 85 as core 21 on socket 0 00:05:19.039 EAL: Detected lcore 86 as core 22 on socket 0 00:05:19.039 EAL: Detected lcore 87 as core 23 on socket 0 00:05:19.039 EAL: Detected lcore 88 as core 24 on socket 0 00:05:19.039 EAL: Detected lcore 89 as core 25 on socket 0 00:05:19.039 EAL: Detected lcore 
90 as core 26 on socket 0 00:05:19.039 EAL: Detected lcore 91 as core 27 on socket 0 00:05:19.039 EAL: Detected lcore 92 as core 28 on socket 0 00:05:19.039 EAL: Detected lcore 93 as core 29 on socket 0 00:05:19.039 EAL: Detected lcore 94 as core 30 on socket 0 00:05:19.039 EAL: Detected lcore 95 as core 31 on socket 0 00:05:19.039 EAL: Detected lcore 96 as core 0 on socket 1 00:05:19.039 EAL: Detected lcore 97 as core 1 on socket 1 00:05:19.039 EAL: Detected lcore 98 as core 2 on socket 1 00:05:19.039 EAL: Detected lcore 99 as core 3 on socket 1 00:05:19.039 EAL: Detected lcore 100 as core 4 on socket 1 00:05:19.039 EAL: Detected lcore 101 as core 5 on socket 1 00:05:19.039 EAL: Detected lcore 102 as core 6 on socket 1 00:05:19.039 EAL: Detected lcore 103 as core 7 on socket 1 00:05:19.039 EAL: Detected lcore 104 as core 8 on socket 1 00:05:19.039 EAL: Detected lcore 105 as core 9 on socket 1 00:05:19.039 EAL: Detected lcore 106 as core 10 on socket 1 00:05:19.039 EAL: Detected lcore 107 as core 11 on socket 1 00:05:19.039 EAL: Detected lcore 108 as core 12 on socket 1 00:05:19.039 EAL: Detected lcore 109 as core 13 on socket 1 00:05:19.039 EAL: Detected lcore 110 as core 14 on socket 1 00:05:19.039 EAL: Detected lcore 111 as core 15 on socket 1 00:05:19.039 EAL: Detected lcore 112 as core 16 on socket 1 00:05:19.039 EAL: Detected lcore 113 as core 17 on socket 1 00:05:19.039 EAL: Detected lcore 114 as core 18 on socket 1 00:05:19.039 EAL: Detected lcore 115 as core 19 on socket 1 00:05:19.039 EAL: Detected lcore 116 as core 20 on socket 1 00:05:19.039 EAL: Detected lcore 117 as core 21 on socket 1 00:05:19.039 EAL: Detected lcore 118 as core 22 on socket 1 00:05:19.039 EAL: Detected lcore 119 as core 23 on socket 1 00:05:19.039 EAL: Detected lcore 120 as core 24 on socket 1 00:05:19.039 EAL: Detected lcore 121 as core 25 on socket 1 00:05:19.039 EAL: Detected lcore 122 as core 26 on socket 1 00:05:19.039 EAL: Detected lcore 123 as core 27 on socket 1 00:05:19.039 EAL: Detected lcore 124 as core 28 on socket 1 00:05:19.039 EAL: Detected lcore 125 as core 29 on socket 1 00:05:19.039 EAL: Detected lcore 126 as core 30 on socket 1 00:05:19.039 EAL: Detected lcore 127 as core 31 on socket 1 00:05:19.039 EAL: Maximum logical cores by configuration: 128 00:05:19.039 EAL: Detected CPU lcores: 128 00:05:19.039 EAL: Detected NUMA nodes: 2 00:05:19.039 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:19.039 EAL: Detected shared linkage of DPDK 00:05:19.300 EAL: No shared files mode enabled, IPC will be disabled 00:05:19.300 EAL: Bus pci wants IOVA as 'DC' 00:05:19.300 EAL: Buses did not request a specific IOVA mode. 00:05:19.300 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:19.300 EAL: Selected IOVA mode 'VA' 00:05:19.301 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.301 EAL: Probing VFIO support... 00:05:19.301 EAL: IOMMU type 1 (Type 1) is supported 00:05:19.301 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:19.301 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:19.301 EAL: VFIO support initialized 00:05:19.301 EAL: Ask a virtual area of 0x2e000 bytes 00:05:19.301 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:19.301 EAL: Setting up physically contiguous memory... 
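[editor's note] EAL reports "IOMMU type 1 (Type 1) is supported" and "VFIO support initialized" just above. The following is a generic shell sketch for confirming the same preconditions by hand; it is not the exact check scripts/setup.sh performs, only the standard kernel sysfs interfaces.

    # Rough VFIO/IOMMU sanity check: populated IOMMU groups plus a loaded vfio-pci driver.
    groups=$(ls /sys/kernel/iommu_groups 2>/dev/null | wc -l)
    if [[ $groups -gt 0 ]] && [[ -d /sys/bus/pci/drivers/vfio-pci ]]; then
        echo "IOMMU enabled ($groups groups), vfio-pci driver present"
    else
        echo "IOMMU groups empty or vfio-pci driver not loaded"
    fi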
00:05:19.301 EAL: Setting maximum number of open files to 524288 00:05:19.301 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:19.301 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:19.301 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:19.301 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.301 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:19.301 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.301 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.301 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:19.301 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:19.301 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.301 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:19.301 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.301 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.301 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:19.301 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:19.301 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.301 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:19.301 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.301 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.301 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:19.301 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:19.301 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.301 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:19.301 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:19.301 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.301 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:19.301 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:19.301 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:19.301 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.301 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:19.301 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.301 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.301 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:19.301 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:19.301 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.301 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:19.301 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.301 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.301 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:19.301 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:19.301 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.301 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:19.301 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.301 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.301 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:19.301 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:19.301 EAL: Ask a virtual area of 0x61000 bytes 00:05:19.301 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:19.301 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:19.301 EAL: Ask a virtual area of 0x400000000 bytes 00:05:19.301 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:19.301 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:19.301 EAL: Hugepages will be freed exactly as allocated. 00:05:19.301 EAL: No shared files mode enabled, IPC is disabled 00:05:19.301 EAL: No shared files mode enabled, IPC is disabled 00:05:19.301 EAL: TSC frequency is ~1900000 KHz 00:05:19.301 EAL: Main lcore 0 is ready (tid=7f9643623a40;cpuset=[0]) 00:05:19.301 EAL: Trying to obtain current memory policy. 00:05:19.301 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.301 EAL: Restoring previous memory policy: 0 00:05:19.301 EAL: request: mp_malloc_sync 00:05:19.301 EAL: No shared files mode enabled, IPC is disabled 00:05:19.301 EAL: Heap on socket 0 was expanded by 2MB 00:05:19.301 EAL: No shared files mode enabled, IPC is disabled 00:05:19.301 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:19.301 EAL: Mem event callback 'spdk:(nil)' registered 00:05:19.301 00:05:19.301 00:05:19.301 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.301 http://cunit.sourceforge.net/ 00:05:19.301 00:05:19.301 00:05:19.301 Suite: components_suite 00:05:19.562 Test: vtophys_malloc_test ...passed 00:05:19.562 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:19.562 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.562 EAL: Restoring previous memory policy: 4 00:05:19.562 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.562 EAL: request: mp_malloc_sync 00:05:19.562 EAL: No shared files mode enabled, IPC is disabled 00:05:19.562 EAL: Heap on socket 0 was expanded by 4MB 00:05:19.562 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.562 EAL: request: mp_malloc_sync 00:05:19.562 EAL: No shared files mode enabled, IPC is disabled 00:05:19.562 EAL: Heap on socket 0 was shrunk by 4MB 00:05:19.562 EAL: Trying to obtain current memory policy. 00:05:19.562 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.562 EAL: Restoring previous memory policy: 4 00:05:19.562 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.562 EAL: request: mp_malloc_sync 00:05:19.562 EAL: No shared files mode enabled, IPC is disabled 00:05:19.562 EAL: Heap on socket 0 was expanded by 6MB 00:05:19.562 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.562 EAL: request: mp_malloc_sync 00:05:19.562 EAL: No shared files mode enabled, IPC is disabled 00:05:19.562 EAL: Heap on socket 0 was shrunk by 6MB 00:05:19.562 EAL: Trying to obtain current memory policy. 00:05:19.562 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.562 EAL: Restoring previous memory policy: 4 00:05:19.562 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.562 EAL: request: mp_malloc_sync 00:05:19.562 EAL: No shared files mode enabled, IPC is disabled 00:05:19.562 EAL: Heap on socket 0 was expanded by 10MB 00:05:19.562 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.562 EAL: request: mp_malloc_sync 00:05:19.562 EAL: No shared files mode enabled, IPC is disabled 00:05:19.562 EAL: Heap on socket 0 was shrunk by 10MB 00:05:19.562 EAL: Trying to obtain current memory policy. 
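[editor's note] The 0x400000000-byte virtual-area reservations requested above follow directly from the memseg list geometry EAL printed (n_segs:8192, hugepage_sz:2097152). A quick arithmetic check, using only those two logged values:

    # 8192 segments per memseg list x 2 MiB per segment = 16 GiB of VA per list.
    printf '0x%x bytes\n' $((8192 * 2097152))   # -> 0x400000000, matching each "Ask a virtual area" line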
00:05:19.562 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.562 EAL: Restoring previous memory policy: 4 00:05:19.562 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.562 EAL: request: mp_malloc_sync 00:05:19.562 EAL: No shared files mode enabled, IPC is disabled 00:05:19.562 EAL: Heap on socket 0 was expanded by 18MB 00:05:19.562 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.562 EAL: request: mp_malloc_sync 00:05:19.562 EAL: No shared files mode enabled, IPC is disabled 00:05:19.562 EAL: Heap on socket 0 was shrunk by 18MB 00:05:19.562 EAL: Trying to obtain current memory policy. 00:05:19.562 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.562 EAL: Restoring previous memory policy: 4 00:05:19.562 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.562 EAL: request: mp_malloc_sync 00:05:19.562 EAL: No shared files mode enabled, IPC is disabled 00:05:19.562 EAL: Heap on socket 0 was expanded by 34MB 00:05:19.562 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.562 EAL: request: mp_malloc_sync 00:05:19.562 EAL: No shared files mode enabled, IPC is disabled 00:05:19.562 EAL: Heap on socket 0 was shrunk by 34MB 00:05:19.562 EAL: Trying to obtain current memory policy. 00:05:19.562 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.562 EAL: Restoring previous memory policy: 4 00:05:19.562 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.562 EAL: request: mp_malloc_sync 00:05:19.562 EAL: No shared files mode enabled, IPC is disabled 00:05:19.562 EAL: Heap on socket 0 was expanded by 66MB 00:05:19.562 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.562 EAL: request: mp_malloc_sync 00:05:19.562 EAL: No shared files mode enabled, IPC is disabled 00:05:19.562 EAL: Heap on socket 0 was shrunk by 66MB 00:05:19.562 EAL: Trying to obtain current memory policy. 00:05:19.562 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.562 EAL: Restoring previous memory policy: 4 00:05:19.562 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.562 EAL: request: mp_malloc_sync 00:05:19.562 EAL: No shared files mode enabled, IPC is disabled 00:05:19.562 EAL: Heap on socket 0 was expanded by 130MB 00:05:19.823 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.823 EAL: request: mp_malloc_sync 00:05:19.823 EAL: No shared files mode enabled, IPC is disabled 00:05:19.823 EAL: Heap on socket 0 was shrunk by 130MB 00:05:19.823 EAL: Trying to obtain current memory policy. 00:05:19.823 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.823 EAL: Restoring previous memory policy: 4 00:05:19.823 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.823 EAL: request: mp_malloc_sync 00:05:19.823 EAL: No shared files mode enabled, IPC is disabled 00:05:19.823 EAL: Heap on socket 0 was expanded by 258MB 00:05:20.082 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.082 EAL: request: mp_malloc_sync 00:05:20.082 EAL: No shared files mode enabled, IPC is disabled 00:05:20.082 EAL: Heap on socket 0 was shrunk by 258MB 00:05:20.082 EAL: Trying to obtain current memory policy. 
00:05:20.082 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.343 EAL: Restoring previous memory policy: 4 00:05:20.343 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.343 EAL: request: mp_malloc_sync 00:05:20.343 EAL: No shared files mode enabled, IPC is disabled 00:05:20.343 EAL: Heap on socket 0 was expanded by 514MB 00:05:20.603 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.603 EAL: request: mp_malloc_sync 00:05:20.603 EAL: No shared files mode enabled, IPC is disabled 00:05:20.603 EAL: Heap on socket 0 was shrunk by 514MB 00:05:20.864 EAL: Trying to obtain current memory policy. 00:05:20.864 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:20.864 EAL: Restoring previous memory policy: 4 00:05:20.864 EAL: Calling mem event callback 'spdk:(nil)' 00:05:20.864 EAL: request: mp_malloc_sync 00:05:20.864 EAL: No shared files mode enabled, IPC is disabled 00:05:20.864 EAL: Heap on socket 0 was expanded by 1026MB 00:05:21.807 EAL: Calling mem event callback 'spdk:(nil)' 00:05:21.807 EAL: request: mp_malloc_sync 00:05:21.807 EAL: No shared files mode enabled, IPC is disabled 00:05:21.807 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:22.067 passed 00:05:22.067 00:05:22.067 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.067 suites 1 1 n/a 0 0 00:05:22.067 tests 2 2 2 0 0 00:05:22.067 asserts 497 497 497 0 n/a 00:05:22.067 00:05:22.067 Elapsed time = 2.841 seconds 00:05:22.067 EAL: Calling mem event callback 'spdk:(nil)' 00:05:22.067 EAL: request: mp_malloc_sync 00:05:22.067 EAL: No shared files mode enabled, IPC is disabled 00:05:22.067 EAL: Heap on socket 0 was shrunk by 2MB 00:05:22.067 EAL: No shared files mode enabled, IPC is disabled 00:05:22.067 EAL: No shared files mode enabled, IPC is disabled 00:05:22.067 EAL: No shared files mode enabled, IPC is disabled 00:05:22.328 00:05:22.328 real 0m3.095s 00:05:22.328 user 0m2.409s 00:05:22.328 sys 0m0.633s 00:05:22.328 00:20:48 env.env_vtophys -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:22.328 00:20:48 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:22.328 ************************************ 00:05:22.328 END TEST env_vtophys 00:05:22.328 ************************************ 00:05:22.328 00:20:48 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/pci/pci_ut 00:05:22.328 00:20:48 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:22.328 00:20:48 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:22.328 00:20:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.328 ************************************ 00:05:22.328 START TEST env_pci 00:05:22.328 ************************************ 00:05:22.328 00:20:48 env.env_pci -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/pci/pci_ut 00:05:22.328 00:05:22.328 00:05:22.328 CUnit - A unit testing framework for C - Version 2.1-3 00:05:22.328 http://cunit.sourceforge.net/ 00:05:22.328 00:05:22.328 00:05:22.328 Suite: pci 00:05:22.328 Test: pci_hook ...[2024-05-15 00:20:48.341968] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1786099 has claimed it 00:05:22.328 EAL: Cannot find device (10000:00:01.0) 00:05:22.328 EAL: Failed to attach device on primary process 00:05:22.328 passed 00:05:22.328 00:05:22.328 Run Summary: Type Total Ran Passed Failed Inactive 00:05:22.328 suites 1 1 
n/a 0 0 00:05:22.328 tests 1 1 1 0 0 00:05:22.328 asserts 25 25 25 0 n/a 00:05:22.328 00:05:22.328 Elapsed time = 0.058 seconds 00:05:22.328 00:05:22.328 real 0m0.120s 00:05:22.328 user 0m0.034s 00:05:22.328 sys 0m0.085s 00:05:22.328 00:20:48 env.env_pci -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:22.328 00:20:48 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:22.328 ************************************ 00:05:22.328 END TEST env_pci 00:05:22.328 ************************************ 00:05:22.328 00:20:48 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:22.328 00:20:48 env -- env/env.sh@15 -- # uname 00:05:22.328 00:20:48 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:22.328 00:20:48 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:22.328 00:20:48 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:22.328 00:20:48 env -- common/autotest_common.sh@1098 -- # '[' 5 -le 1 ']' 00:05:22.329 00:20:48 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:22.329 00:20:48 env -- common/autotest_common.sh@10 -- # set +x 00:05:22.589 ************************************ 00:05:22.589 START TEST env_dpdk_post_init 00:05:22.589 ************************************ 00:05:22.589 00:20:48 env.env_dpdk_post_init -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:22.589 EAL: Detected CPU lcores: 128 00:05:22.589 EAL: Detected NUMA nodes: 2 00:05:22.589 EAL: Detected shared linkage of DPDK 00:05:22.589 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:22.589 EAL: Selected IOVA mode 'VA' 00:05:22.589 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.589 EAL: VFIO support initialized 00:05:22.589 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:22.589 EAL: Using IOMMU type 1 (Type 1) 00:05:22.849 EAL: Ignore mapping IO port bar(1) 00:05:22.849 EAL: Ignore mapping IO port bar(3) 00:05:22.849 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:6a:01.0 (socket 0) 00:05:23.110 EAL: Ignore mapping IO port bar(1) 00:05:23.110 EAL: Ignore mapping IO port bar(3) 00:05:23.110 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:6a:02.0 (socket 0) 00:05:23.371 EAL: Ignore mapping IO port bar(1) 00:05:23.371 EAL: Ignore mapping IO port bar(3) 00:05:23.371 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:6f:01.0 (socket 0) 00:05:23.371 EAL: Ignore mapping IO port bar(1) 00:05:23.371 EAL: Ignore mapping IO port bar(3) 00:05:23.631 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:6f:02.0 (socket 0) 00:05:23.631 EAL: Ignore mapping IO port bar(1) 00:05:23.631 EAL: Ignore mapping IO port bar(3) 00:05:23.892 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:74:01.0 (socket 0) 00:05:23.892 EAL: Ignore mapping IO port bar(1) 00:05:23.892 EAL: Ignore mapping IO port bar(3) 00:05:24.153 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:74:02.0 (socket 0) 00:05:24.153 EAL: Ignore mapping IO port bar(1) 00:05:24.153 EAL: Ignore mapping IO port bar(3) 00:05:24.153 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:79:01.0 (socket 0) 00:05:24.413 EAL: Ignore mapping IO port bar(1) 00:05:24.413 EAL: Ignore mapping IO port bar(3) 00:05:24.413 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:79:02.0 (socket 0) 00:05:25.353 EAL: 
Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:c9:00.0 (socket 1) 00:05:25.921 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:ca:00.0 (socket 1) 00:05:26.181 EAL: Ignore mapping IO port bar(1) 00:05:26.181 EAL: Ignore mapping IO port bar(3) 00:05:26.181 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:e7:01.0 (socket 1) 00:05:26.181 EAL: Ignore mapping IO port bar(1) 00:05:26.181 EAL: Ignore mapping IO port bar(3) 00:05:26.442 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:e7:02.0 (socket 1) 00:05:26.442 EAL: Ignore mapping IO port bar(1) 00:05:26.442 EAL: Ignore mapping IO port bar(3) 00:05:26.703 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:ec:01.0 (socket 1) 00:05:26.703 EAL: Ignore mapping IO port bar(1) 00:05:26.703 EAL: Ignore mapping IO port bar(3) 00:05:26.964 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:ec:02.0 (socket 1) 00:05:26.964 EAL: Ignore mapping IO port bar(1) 00:05:26.964 EAL: Ignore mapping IO port bar(3) 00:05:26.964 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:f1:01.0 (socket 1) 00:05:27.224 EAL: Ignore mapping IO port bar(1) 00:05:27.224 EAL: Ignore mapping IO port bar(3) 00:05:27.224 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:f1:02.0 (socket 1) 00:05:27.489 EAL: Ignore mapping IO port bar(1) 00:05:27.489 EAL: Ignore mapping IO port bar(3) 00:05:27.489 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:f6:01.0 (socket 1) 00:05:27.748 EAL: Ignore mapping IO port bar(1) 00:05:27.748 EAL: Ignore mapping IO port bar(3) 00:05:27.748 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:f6:02.0 (socket 1) 00:05:31.947 EAL: Releasing PCI mapped resource for 0000:c9:00.0 00:05:31.948 EAL: Calling pci_unmap_resource for 0000:c9:00.0 at 0x202001180000 00:05:31.948 EAL: Releasing PCI mapped resource for 0000:ca:00.0 00:05:31.948 EAL: Calling pci_unmap_resource for 0000:ca:00.0 at 0x202001184000 00:05:32.519 Starting DPDK initialization... 00:05:32.519 Starting SPDK post initialization... 00:05:32.519 SPDK NVMe probe 00:05:32.519 Attaching to 0000:c9:00.0 00:05:32.519 Attaching to 0000:ca:00.0 00:05:32.519 Attached to 0000:c9:00.0 00:05:32.519 Attached to 0000:ca:00.0 00:05:32.519 Cleaning up... 
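[editor's note] The "idxd -> vfio-pci" and "nvme -> vfio-pci" transitions logged throughout this run are ordinary sysfs driver rebinds. The sketch below shows that generic mechanism (driver_override, unbind, drivers_probe); it is not necessarily the exact command sequence scripts/setup.sh issues, and the BDF is just one of the NVMe devices from this run used as an example.

    bdf=0000:c9:00.0                                                      # example device from this run
    echo vfio-pci | sudo tee /sys/bus/pci/devices/$bdf/driver_override    # prefer vfio-pci on next probe
    if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
        echo "$bdf" | sudo tee /sys/bus/pci/devices/$bdf/driver/unbind    # detach the current driver (e.g. nvme)
    fi
    echo "$bdf" | sudo tee /sys/bus/pci/drivers_probe                     # re-probe; driver_override selects vfio-pci

Clearing driver_override and re-probing reverses the binding, which is what the later "vfio-pci -> nvme" lines in this log correspond to.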
00:05:34.431 00:05:34.431 real 0m11.573s 00:05:34.431 user 0m4.635s 00:05:34.431 sys 0m0.223s 00:05:34.431 00:21:00 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:34.431 00:21:00 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:34.431 ************************************ 00:05:34.431 END TEST env_dpdk_post_init 00:05:34.431 ************************************ 00:05:34.431 00:21:00 env -- env/env.sh@26 -- # uname 00:05:34.431 00:21:00 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:34.431 00:21:00 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:34.431 00:21:00 env -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:34.431 00:21:00 env -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:34.431 00:21:00 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.431 ************************************ 00:05:34.431 START TEST env_mem_callbacks 00:05:34.431 ************************************ 00:05:34.431 00:21:00 env.env_mem_callbacks -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:34.431 EAL: Detected CPU lcores: 128 00:05:34.431 EAL: Detected NUMA nodes: 2 00:05:34.431 EAL: Detected shared linkage of DPDK 00:05:34.431 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:34.431 EAL: Selected IOVA mode 'VA' 00:05:34.431 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.431 EAL: VFIO support initialized 00:05:34.431 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:34.431 00:05:34.431 00:05:34.431 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.431 http://cunit.sourceforge.net/ 00:05:34.431 00:05:34.431 00:05:34.431 Suite: memory 00:05:34.431 Test: test ... 
00:05:34.431 register 0x200000200000 2097152 00:05:34.431 malloc 3145728 00:05:34.431 register 0x200000400000 4194304 00:05:34.431 buf 0x2000004fffc0 len 3145728 PASSED 00:05:34.431 malloc 64 00:05:34.431 buf 0x2000004ffec0 len 64 PASSED 00:05:34.431 malloc 4194304 00:05:34.431 register 0x200000800000 6291456 00:05:34.431 buf 0x2000009fffc0 len 4194304 PASSED 00:05:34.431 free 0x2000004fffc0 3145728 00:05:34.431 free 0x2000004ffec0 64 00:05:34.431 unregister 0x200000400000 4194304 PASSED 00:05:34.431 free 0x2000009fffc0 4194304 00:05:34.431 unregister 0x200000800000 6291456 PASSED 00:05:34.431 malloc 8388608 00:05:34.431 register 0x200000400000 10485760 00:05:34.431 buf 0x2000005fffc0 len 8388608 PASSED 00:05:34.431 free 0x2000005fffc0 8388608 00:05:34.431 unregister 0x200000400000 10485760 PASSED 00:05:34.431 passed 00:05:34.431 00:05:34.431 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.431 suites 1 1 n/a 0 0 00:05:34.431 tests 1 1 1 0 0 00:05:34.431 asserts 15 15 15 0 n/a 00:05:34.431 00:05:34.431 Elapsed time = 0.023 seconds 00:05:34.431 00:05:34.431 real 0m0.138s 00:05:34.431 user 0m0.049s 00:05:34.431 sys 0m0.087s 00:05:34.431 00:21:00 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:34.431 00:21:00 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:34.431 ************************************ 00:05:34.431 END TEST env_mem_callbacks 00:05:34.431 ************************************ 00:05:34.431 00:05:34.431 real 0m15.670s 00:05:34.431 user 0m7.562s 00:05:34.431 sys 0m1.355s 00:05:34.431 00:21:00 env -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:34.431 00:21:00 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.431 ************************************ 00:05:34.431 END TEST env 00:05:34.431 ************************************ 00:05:34.431 00:21:00 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/rpc.sh 00:05:34.431 00:21:00 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:34.431 00:21:00 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:34.431 00:21:00 -- common/autotest_common.sh@10 -- # set +x 00:05:34.431 ************************************ 00:05:34.431 START TEST rpc 00:05:34.431 ************************************ 00:05:34.431 00:21:00 rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/rpc.sh 00:05:34.431 * Looking for test storage... 00:05:34.431 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:05:34.431 00:21:00 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1788638 00:05:34.431 00:21:00 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.431 00:21:00 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1788638 00:05:34.431 00:21:00 rpc -- common/autotest_common.sh@828 -- # '[' -z 1788638 ']' 00:05:34.431 00:21:00 rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.431 00:21:00 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:34.431 00:21:00 rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:34.431 00:21:00 rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
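Aside: the rpc_integrity pass that follows drives spdk_tgt purely over JSON-RPC: create a malloc bdev, wrap it in a passthru bdev, verify both with bdev_get_bdevs, then tear them down. A minimal manual sketch of the same sequence, assuming scripts/rpc.py from this tree and the default /var/tmp/spdk.sock socket:
  ./scripts/rpc.py bdev_malloc_create 8 512                 # 8 MiB, 512-byte blocks -> "Malloc0"
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length               # expect 2 (Malloc0 + Passthru0)
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0
  ./scripts/rpc.py bdev_get_bdevs | jq length               # expect 0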
00:05:34.431 00:21:00 rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:34.431 00:21:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.431 [2024-05-15 00:21:00.564662] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:05:34.431 [2024-05-15 00:21:00.564800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1788638 ] 00:05:34.692 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.692 [2024-05-15 00:21:00.698188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.692 [2024-05-15 00:21:00.798217] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:34.692 [2024-05-15 00:21:00.798265] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1788638' to capture a snapshot of events at runtime. 00:05:34.692 [2024-05-15 00:21:00.798277] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:34.692 [2024-05-15 00:21:00.798287] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:34.692 [2024-05-15 00:21:00.798297] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1788638 for offline analysis/debug. 00:05:34.692 [2024-05-15 00:21:00.798335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.262 00:21:01 rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:35.262 00:21:01 rpc -- common/autotest_common.sh@861 -- # return 0 00:05:35.262 00:21:01 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:05:35.262 00:21:01 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:05:35.262 00:21:01 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:35.262 00:21:01 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:35.262 00:21:01 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:35.262 00:21:01 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:35.262 00:21:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.262 ************************************ 00:05:35.262 START TEST rpc_integrity 00:05:35.262 ************************************ 00:05:35.262 00:21:01 rpc.rpc_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:05:35.262 00:21:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:35.263 00:21:01 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:35.263 00:21:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.263 00:21:01 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:35.263 00:21:01 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:35.263 00:21:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:35.263 00:21:01 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:35.263 
00:21:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:35.263 00:21:01 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:35.263 00:21:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.263 00:21:01 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:35.263 00:21:01 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:35.263 00:21:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:35.263 00:21:01 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:35.263 00:21:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.263 00:21:01 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:35.263 00:21:01 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:35.263 { 00:05:35.263 "name": "Malloc0", 00:05:35.263 "aliases": [ 00:05:35.263 "0bc080dd-4518-414c-99d5-8b17fdd56132" 00:05:35.263 ], 00:05:35.263 "product_name": "Malloc disk", 00:05:35.263 "block_size": 512, 00:05:35.263 "num_blocks": 16384, 00:05:35.263 "uuid": "0bc080dd-4518-414c-99d5-8b17fdd56132", 00:05:35.263 "assigned_rate_limits": { 00:05:35.263 "rw_ios_per_sec": 0, 00:05:35.263 "rw_mbytes_per_sec": 0, 00:05:35.263 "r_mbytes_per_sec": 0, 00:05:35.263 "w_mbytes_per_sec": 0 00:05:35.263 }, 00:05:35.263 "claimed": false, 00:05:35.263 "zoned": false, 00:05:35.263 "supported_io_types": { 00:05:35.263 "read": true, 00:05:35.263 "write": true, 00:05:35.263 "unmap": true, 00:05:35.263 "write_zeroes": true, 00:05:35.263 "flush": true, 00:05:35.263 "reset": true, 00:05:35.263 "compare": false, 00:05:35.263 "compare_and_write": false, 00:05:35.263 "abort": true, 00:05:35.263 "nvme_admin": false, 00:05:35.263 "nvme_io": false 00:05:35.263 }, 00:05:35.263 "memory_domains": [ 00:05:35.263 { 00:05:35.263 "dma_device_id": "system", 00:05:35.263 "dma_device_type": 1 00:05:35.263 }, 00:05:35.263 { 00:05:35.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.263 "dma_device_type": 2 00:05:35.263 } 00:05:35.263 ], 00:05:35.263 "driver_specific": {} 00:05:35.263 } 00:05:35.263 ]' 00:05:35.263 00:21:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:35.524 00:21:01 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:35.524 00:21:01 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:35.524 00:21:01 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:35.524 00:21:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.524 [2024-05-15 00:21:01.438912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:35.524 [2024-05-15 00:21:01.438965] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:35.524 [2024-05-15 00:21:01.438994] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600001fb80 00:05:35.524 [2024-05-15 00:21:01.439005] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:35.524 [2024-05-15 00:21:01.440886] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:35.524 [2024-05-15 00:21:01.440916] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:35.524 Passthru0 00:05:35.524 00:21:01 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:35.524 00:21:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:35.524 00:21:01 
rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:35.524 00:21:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.524 00:21:01 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:35.524 00:21:01 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:35.524 { 00:05:35.524 "name": "Malloc0", 00:05:35.524 "aliases": [ 00:05:35.524 "0bc080dd-4518-414c-99d5-8b17fdd56132" 00:05:35.524 ], 00:05:35.524 "product_name": "Malloc disk", 00:05:35.524 "block_size": 512, 00:05:35.524 "num_blocks": 16384, 00:05:35.524 "uuid": "0bc080dd-4518-414c-99d5-8b17fdd56132", 00:05:35.524 "assigned_rate_limits": { 00:05:35.524 "rw_ios_per_sec": 0, 00:05:35.524 "rw_mbytes_per_sec": 0, 00:05:35.524 "r_mbytes_per_sec": 0, 00:05:35.524 "w_mbytes_per_sec": 0 00:05:35.524 }, 00:05:35.524 "claimed": true, 00:05:35.524 "claim_type": "exclusive_write", 00:05:35.524 "zoned": false, 00:05:35.524 "supported_io_types": { 00:05:35.524 "read": true, 00:05:35.524 "write": true, 00:05:35.524 "unmap": true, 00:05:35.524 "write_zeroes": true, 00:05:35.524 "flush": true, 00:05:35.524 "reset": true, 00:05:35.524 "compare": false, 00:05:35.524 "compare_and_write": false, 00:05:35.524 "abort": true, 00:05:35.524 "nvme_admin": false, 00:05:35.524 "nvme_io": false 00:05:35.524 }, 00:05:35.524 "memory_domains": [ 00:05:35.524 { 00:05:35.524 "dma_device_id": "system", 00:05:35.524 "dma_device_type": 1 00:05:35.524 }, 00:05:35.524 { 00:05:35.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.524 "dma_device_type": 2 00:05:35.524 } 00:05:35.524 ], 00:05:35.524 "driver_specific": {} 00:05:35.524 }, 00:05:35.524 { 00:05:35.524 "name": "Passthru0", 00:05:35.524 "aliases": [ 00:05:35.524 "2e0e0757-631b-5619-8295-19eda6fd15e5" 00:05:35.524 ], 00:05:35.524 "product_name": "passthru", 00:05:35.524 "block_size": 512, 00:05:35.524 "num_blocks": 16384, 00:05:35.524 "uuid": "2e0e0757-631b-5619-8295-19eda6fd15e5", 00:05:35.524 "assigned_rate_limits": { 00:05:35.524 "rw_ios_per_sec": 0, 00:05:35.524 "rw_mbytes_per_sec": 0, 00:05:35.524 "r_mbytes_per_sec": 0, 00:05:35.524 "w_mbytes_per_sec": 0 00:05:35.524 }, 00:05:35.524 "claimed": false, 00:05:35.524 "zoned": false, 00:05:35.524 "supported_io_types": { 00:05:35.524 "read": true, 00:05:35.524 "write": true, 00:05:35.524 "unmap": true, 00:05:35.524 "write_zeroes": true, 00:05:35.524 "flush": true, 00:05:35.524 "reset": true, 00:05:35.524 "compare": false, 00:05:35.524 "compare_and_write": false, 00:05:35.524 "abort": true, 00:05:35.524 "nvme_admin": false, 00:05:35.524 "nvme_io": false 00:05:35.524 }, 00:05:35.524 "memory_domains": [ 00:05:35.524 { 00:05:35.524 "dma_device_id": "system", 00:05:35.524 "dma_device_type": 1 00:05:35.524 }, 00:05:35.524 { 00:05:35.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.524 "dma_device_type": 2 00:05:35.524 } 00:05:35.524 ], 00:05:35.524 "driver_specific": { 00:05:35.524 "passthru": { 00:05:35.524 "name": "Passthru0", 00:05:35.524 "base_bdev_name": "Malloc0" 00:05:35.524 } 00:05:35.524 } 00:05:35.524 } 00:05:35.524 ]' 00:05:35.524 00:21:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:35.524 00:21:01 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:35.524 00:21:01 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:35.524 00:21:01 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:35.524 00:21:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.524 00:21:01 rpc.rpc_integrity -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:35.524 00:21:01 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:35.524 00:21:01 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:35.524 00:21:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.524 00:21:01 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:35.524 00:21:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:35.524 00:21:01 rpc.rpc_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:35.524 00:21:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.524 00:21:01 rpc.rpc_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:35.524 00:21:01 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:35.524 00:21:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:35.524 00:21:01 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:35.524 00:05:35.524 real 0m0.244s 00:05:35.524 user 0m0.130s 00:05:35.524 sys 0m0.037s 00:05:35.524 00:21:01 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:35.524 00:21:01 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.524 ************************************ 00:05:35.524 END TEST rpc_integrity 00:05:35.524 ************************************ 00:05:35.524 00:21:01 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:35.524 00:21:01 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:35.524 00:21:01 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:35.524 00:21:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.524 ************************************ 00:05:35.524 START TEST rpc_plugins 00:05:35.524 ************************************ 00:05:35.524 00:21:01 rpc.rpc_plugins -- common/autotest_common.sh@1122 -- # rpc_plugins 00:05:35.524 00:21:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:35.524 00:21:01 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:35.524 00:21:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:35.524 00:21:01 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:35.525 00:21:01 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:35.525 00:21:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:35.525 00:21:01 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:35.525 00:21:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:35.525 00:21:01 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:35.525 00:21:01 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:35.525 { 00:05:35.525 "name": "Malloc1", 00:05:35.525 "aliases": [ 00:05:35.525 "c4bb8514-7be2-4389-8da8-e6a81c587efc" 00:05:35.525 ], 00:05:35.525 "product_name": "Malloc disk", 00:05:35.525 "block_size": 4096, 00:05:35.525 "num_blocks": 256, 00:05:35.525 "uuid": "c4bb8514-7be2-4389-8da8-e6a81c587efc", 00:05:35.525 "assigned_rate_limits": { 00:05:35.525 "rw_ios_per_sec": 0, 00:05:35.525 "rw_mbytes_per_sec": 0, 00:05:35.525 "r_mbytes_per_sec": 0, 00:05:35.525 "w_mbytes_per_sec": 0 00:05:35.525 }, 00:05:35.525 "claimed": false, 00:05:35.525 "zoned": false, 00:05:35.525 "supported_io_types": { 00:05:35.525 "read": true, 00:05:35.525 "write": true, 00:05:35.525 "unmap": true, 00:05:35.525 "write_zeroes": true, 00:05:35.525 "flush": true, 00:05:35.525 
"reset": true, 00:05:35.525 "compare": false, 00:05:35.525 "compare_and_write": false, 00:05:35.525 "abort": true, 00:05:35.525 "nvme_admin": false, 00:05:35.525 "nvme_io": false 00:05:35.525 }, 00:05:35.525 "memory_domains": [ 00:05:35.525 { 00:05:35.525 "dma_device_id": "system", 00:05:35.525 "dma_device_type": 1 00:05:35.525 }, 00:05:35.525 { 00:05:35.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.525 "dma_device_type": 2 00:05:35.525 } 00:05:35.525 ], 00:05:35.525 "driver_specific": {} 00:05:35.525 } 00:05:35.525 ]' 00:05:35.525 00:21:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:35.525 00:21:01 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:35.525 00:21:01 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:35.525 00:21:01 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:35.525 00:21:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:35.786 00:21:01 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:35.786 00:21:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:35.786 00:21:01 rpc.rpc_plugins -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:35.786 00:21:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:35.786 00:21:01 rpc.rpc_plugins -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:35.786 00:21:01 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:35.786 00:21:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:35.786 00:21:01 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:35.786 00:05:35.786 real 0m0.120s 00:05:35.786 user 0m0.070s 00:05:35.786 sys 0m0.014s 00:05:35.786 00:21:01 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:35.786 00:21:01 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:35.786 ************************************ 00:05:35.786 END TEST rpc_plugins 00:05:35.786 ************************************ 00:05:35.786 00:21:01 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:35.786 00:21:01 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:35.786 00:21:01 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:35.786 00:21:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.786 ************************************ 00:05:35.786 START TEST rpc_trace_cmd_test 00:05:35.786 ************************************ 00:05:35.786 00:21:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1122 -- # rpc_trace_cmd_test 00:05:35.786 00:21:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:35.786 00:21:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:35.786 00:21:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:35.786 00:21:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:35.786 00:21:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:35.786 00:21:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:35.786 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1788638", 00:05:35.786 "tpoint_group_mask": "0x8", 00:05:35.786 "iscsi_conn": { 00:05:35.786 "mask": "0x2", 00:05:35.786 "tpoint_mask": "0x0" 00:05:35.786 }, 00:05:35.786 "scsi": { 00:05:35.786 "mask": "0x4", 00:05:35.786 "tpoint_mask": "0x0" 00:05:35.786 }, 00:05:35.786 "bdev": { 00:05:35.786 "mask": "0x8", 00:05:35.786 "tpoint_mask": "0xffffffffffffffff" 00:05:35.786 }, 
00:05:35.786 "nvmf_rdma": { 00:05:35.786 "mask": "0x10", 00:05:35.786 "tpoint_mask": "0x0" 00:05:35.786 }, 00:05:35.786 "nvmf_tcp": { 00:05:35.786 "mask": "0x20", 00:05:35.786 "tpoint_mask": "0x0" 00:05:35.786 }, 00:05:35.786 "ftl": { 00:05:35.786 "mask": "0x40", 00:05:35.786 "tpoint_mask": "0x0" 00:05:35.786 }, 00:05:35.786 "blobfs": { 00:05:35.786 "mask": "0x80", 00:05:35.786 "tpoint_mask": "0x0" 00:05:35.786 }, 00:05:35.786 "dsa": { 00:05:35.786 "mask": "0x200", 00:05:35.786 "tpoint_mask": "0x0" 00:05:35.786 }, 00:05:35.786 "thread": { 00:05:35.786 "mask": "0x400", 00:05:35.786 "tpoint_mask": "0x0" 00:05:35.786 }, 00:05:35.786 "nvme_pcie": { 00:05:35.786 "mask": "0x800", 00:05:35.786 "tpoint_mask": "0x0" 00:05:35.786 }, 00:05:35.786 "iaa": { 00:05:35.786 "mask": "0x1000", 00:05:35.786 "tpoint_mask": "0x0" 00:05:35.786 }, 00:05:35.786 "nvme_tcp": { 00:05:35.786 "mask": "0x2000", 00:05:35.786 "tpoint_mask": "0x0" 00:05:35.786 }, 00:05:35.786 "bdev_nvme": { 00:05:35.786 "mask": "0x4000", 00:05:35.786 "tpoint_mask": "0x0" 00:05:35.786 }, 00:05:35.786 "sock": { 00:05:35.786 "mask": "0x8000", 00:05:35.786 "tpoint_mask": "0x0" 00:05:35.786 } 00:05:35.786 }' 00:05:35.786 00:21:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:35.786 00:21:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:35.786 00:21:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:35.786 00:21:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:35.786 00:21:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:35.786 00:21:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:35.786 00:21:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:35.786 00:21:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:35.786 00:21:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:36.048 00:21:01 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:36.048 00:05:36.048 real 0m0.179s 00:05:36.048 user 0m0.145s 00:05:36.048 sys 0m0.027s 00:05:36.048 00:21:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:36.048 00:21:01 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:36.048 ************************************ 00:05:36.048 END TEST rpc_trace_cmd_test 00:05:36.048 ************************************ 00:05:36.048 00:21:01 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:36.048 00:21:01 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:36.048 00:21:01 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:36.048 00:21:01 rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:36.048 00:21:01 rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:36.048 00:21:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.048 ************************************ 00:05:36.048 START TEST rpc_daemon_integrity 00:05:36.048 ************************************ 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1122 -- # rpc_integrity 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:36.048 { 00:05:36.048 "name": "Malloc2", 00:05:36.048 "aliases": [ 00:05:36.048 "e4303fe3-ad6f-4815-8940-2255485399f1" 00:05:36.048 ], 00:05:36.048 "product_name": "Malloc disk", 00:05:36.048 "block_size": 512, 00:05:36.048 "num_blocks": 16384, 00:05:36.048 "uuid": "e4303fe3-ad6f-4815-8940-2255485399f1", 00:05:36.048 "assigned_rate_limits": { 00:05:36.048 "rw_ios_per_sec": 0, 00:05:36.048 "rw_mbytes_per_sec": 0, 00:05:36.048 "r_mbytes_per_sec": 0, 00:05:36.048 "w_mbytes_per_sec": 0 00:05:36.048 }, 00:05:36.048 "claimed": false, 00:05:36.048 "zoned": false, 00:05:36.048 "supported_io_types": { 00:05:36.048 "read": true, 00:05:36.048 "write": true, 00:05:36.048 "unmap": true, 00:05:36.048 "write_zeroes": true, 00:05:36.048 "flush": true, 00:05:36.048 "reset": true, 00:05:36.048 "compare": false, 00:05:36.048 "compare_and_write": false, 00:05:36.048 "abort": true, 00:05:36.048 "nvme_admin": false, 00:05:36.048 "nvme_io": false 00:05:36.048 }, 00:05:36.048 "memory_domains": [ 00:05:36.048 { 00:05:36.048 "dma_device_id": "system", 00:05:36.048 "dma_device_type": 1 00:05:36.048 }, 00:05:36.048 { 00:05:36.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.048 "dma_device_type": 2 00:05:36.048 } 00:05:36.048 ], 00:05:36.048 "driver_specific": {} 00:05:36.048 } 00:05:36.048 ]' 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.048 [2024-05-15 00:21:02.138034] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:36.048 [2024-05-15 00:21:02.138082] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:36.048 [2024-05-15 00:21:02.138105] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000020d80 00:05:36.048 [2024-05-15 00:21:02.138114] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:36.048 [2024-05-15 00:21:02.139740] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:05:36.048 [2024-05-15 00:21:02.139766] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:36.048 Passthru0 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:36.048 { 00:05:36.048 "name": "Malloc2", 00:05:36.048 "aliases": [ 00:05:36.048 "e4303fe3-ad6f-4815-8940-2255485399f1" 00:05:36.048 ], 00:05:36.048 "product_name": "Malloc disk", 00:05:36.048 "block_size": 512, 00:05:36.048 "num_blocks": 16384, 00:05:36.048 "uuid": "e4303fe3-ad6f-4815-8940-2255485399f1", 00:05:36.048 "assigned_rate_limits": { 00:05:36.048 "rw_ios_per_sec": 0, 00:05:36.048 "rw_mbytes_per_sec": 0, 00:05:36.048 "r_mbytes_per_sec": 0, 00:05:36.048 "w_mbytes_per_sec": 0 00:05:36.048 }, 00:05:36.048 "claimed": true, 00:05:36.048 "claim_type": "exclusive_write", 00:05:36.048 "zoned": false, 00:05:36.048 "supported_io_types": { 00:05:36.048 "read": true, 00:05:36.048 "write": true, 00:05:36.048 "unmap": true, 00:05:36.048 "write_zeroes": true, 00:05:36.048 "flush": true, 00:05:36.048 "reset": true, 00:05:36.048 "compare": false, 00:05:36.048 "compare_and_write": false, 00:05:36.048 "abort": true, 00:05:36.048 "nvme_admin": false, 00:05:36.048 "nvme_io": false 00:05:36.048 }, 00:05:36.048 "memory_domains": [ 00:05:36.048 { 00:05:36.048 "dma_device_id": "system", 00:05:36.048 "dma_device_type": 1 00:05:36.048 }, 00:05:36.048 { 00:05:36.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.048 "dma_device_type": 2 00:05:36.048 } 00:05:36.048 ], 00:05:36.048 "driver_specific": {} 00:05:36.048 }, 00:05:36.048 { 00:05:36.048 "name": "Passthru0", 00:05:36.048 "aliases": [ 00:05:36.048 "444e5ffb-2c73-54b0-aad3-8a73ff10a234" 00:05:36.048 ], 00:05:36.048 "product_name": "passthru", 00:05:36.048 "block_size": 512, 00:05:36.048 "num_blocks": 16384, 00:05:36.048 "uuid": "444e5ffb-2c73-54b0-aad3-8a73ff10a234", 00:05:36.048 "assigned_rate_limits": { 00:05:36.048 "rw_ios_per_sec": 0, 00:05:36.048 "rw_mbytes_per_sec": 0, 00:05:36.048 "r_mbytes_per_sec": 0, 00:05:36.048 "w_mbytes_per_sec": 0 00:05:36.048 }, 00:05:36.048 "claimed": false, 00:05:36.048 "zoned": false, 00:05:36.048 "supported_io_types": { 00:05:36.048 "read": true, 00:05:36.048 "write": true, 00:05:36.048 "unmap": true, 00:05:36.048 "write_zeroes": true, 00:05:36.048 "flush": true, 00:05:36.048 "reset": true, 00:05:36.048 "compare": false, 00:05:36.048 "compare_and_write": false, 00:05:36.048 "abort": true, 00:05:36.048 "nvme_admin": false, 00:05:36.048 "nvme_io": false 00:05:36.048 }, 00:05:36.048 "memory_domains": [ 00:05:36.048 { 00:05:36.048 "dma_device_id": "system", 00:05:36.048 "dma_device_type": 1 00:05:36.048 }, 00:05:36.048 { 00:05:36.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.048 "dma_device_type": 2 00:05:36.048 } 00:05:36.048 ], 00:05:36.048 "driver_specific": { 00:05:36.048 "passthru": { 00:05:36.048 "name": "Passthru0", 00:05:36.048 "base_bdev_name": "Malloc2" 00:05:36.048 } 00:05:36.048 } 00:05:36.048 } 00:05:36.048 ]' 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 
00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.048 00:21:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:36.310 00:21:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:36.310 00:21:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:36.310 00:21:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.310 00:21:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:36.310 00:21:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:36.310 00:21:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:36.310 00:21:02 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:36.310 00:05:36.310 real 0m0.227s 00:05:36.310 user 0m0.137s 00:05:36.310 sys 0m0.026s 00:05:36.310 00:21:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:36.310 00:21:02 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.310 ************************************ 00:05:36.310 END TEST rpc_daemon_integrity 00:05:36.310 ************************************ 00:05:36.310 00:21:02 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:36.310 00:21:02 rpc -- rpc/rpc.sh@84 -- # killprocess 1788638 00:05:36.310 00:21:02 rpc -- common/autotest_common.sh@947 -- # '[' -z 1788638 ']' 00:05:36.310 00:21:02 rpc -- common/autotest_common.sh@951 -- # kill -0 1788638 00:05:36.310 00:21:02 rpc -- common/autotest_common.sh@952 -- # uname 00:05:36.310 00:21:02 rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:36.310 00:21:02 rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1788638 00:05:36.310 00:21:02 rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:36.310 00:21:02 rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:36.310 00:21:02 rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1788638' 00:05:36.310 killing process with pid 1788638 00:05:36.310 00:21:02 rpc -- common/autotest_common.sh@966 -- # kill 1788638 00:05:36.310 00:21:02 rpc -- common/autotest_common.sh@971 -- # wait 1788638 00:05:37.251 00:05:37.251 real 0m2.799s 00:05:37.251 user 0m3.165s 00:05:37.251 sys 0m0.783s 00:05:37.251 00:21:03 rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:37.251 00:21:03 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.251 ************************************ 00:05:37.251 END TEST rpc 00:05:37.251 ************************************ 00:05:37.251 00:21:03 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:37.251 00:21:03 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 
00:05:37.251 00:21:03 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:37.251 00:21:03 -- common/autotest_common.sh@10 -- # set +x 00:05:37.251 ************************************ 00:05:37.251 START TEST skip_rpc 00:05:37.251 ************************************ 00:05:37.251 00:21:03 skip_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:37.251 * Looking for test storage... 00:05:37.251 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:05:37.251 00:21:03 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/config.json 00:05:37.251 00:21:03 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/log.txt 00:05:37.251 00:21:03 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:37.251 00:21:03 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:37.251 00:21:03 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:37.251 00:21:03 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.251 ************************************ 00:05:37.251 START TEST skip_rpc 00:05:37.251 ************************************ 00:05:37.251 00:21:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1122 -- # test_skip_rpc 00:05:37.251 00:21:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1789492 00:05:37.251 00:21:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:37.251 00:21:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:37.251 00:21:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:37.511 [2024-05-15 00:21:03.461137] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:05:37.511 [2024-05-15 00:21:03.461261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1789492 ] 00:05:37.511 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.511 [2024-05-15 00:21:03.591572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.770 [2024-05-15 00:21:03.689785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.053 00:21:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:43.053 00:21:08 skip_rpc.skip_rpc -- common/autotest_common.sh@649 -- # local es=0 00:05:43.053 00:21:08 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:43.053 00:21:08 skip_rpc.skip_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:05:43.053 00:21:08 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:43.053 00:21:08 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:05:43.053 00:21:08 skip_rpc.skip_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:43.053 00:21:08 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # rpc_cmd spdk_get_version 00:05:43.053 00:21:08 skip_rpc.skip_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:43.053 00:21:08 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.053 00:21:08 skip_rpc.skip_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:43.053 00:21:08 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # es=1 00:05:43.053 00:21:08 skip_rpc.skip_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:43.053 00:21:08 skip_rpc.skip_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:43.053 00:21:08 skip_rpc.skip_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:43.053 00:21:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:43.053 00:21:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1789492 00:05:43.053 00:21:08 skip_rpc.skip_rpc -- common/autotest_common.sh@947 -- # '[' -z 1789492 ']' 00:05:43.053 00:21:08 skip_rpc.skip_rpc -- common/autotest_common.sh@951 -- # kill -0 1789492 00:05:43.053 00:21:08 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # uname 00:05:43.053 00:21:08 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:43.053 00:21:08 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1789492 00:05:43.053 00:21:08 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:43.053 00:21:08 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:43.053 00:21:08 skip_rpc.skip_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1789492' 00:05:43.053 killing process with pid 1789492 00:05:43.053 00:21:08 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # kill 1789492 00:05:43.053 00:21:08 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # wait 1789492 00:05:43.314 00:05:43.314 real 0m5.894s 00:05:43.314 user 0m5.531s 00:05:43.314 sys 0m0.357s 00:05:43.314 00:21:09 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:43.314 00:21:09 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.314 ************************************ 00:05:43.314 END TEST skip_rpc 
00:05:43.314 ************************************ 00:05:43.314 00:21:09 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:43.314 00:21:09 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:43.314 00:21:09 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:43.314 00:21:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.314 ************************************ 00:05:43.314 START TEST skip_rpc_with_json 00:05:43.314 ************************************ 00:05:43.314 00:21:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_json 00:05:43.314 00:21:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:43.314 00:21:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1791095 00:05:43.314 00:21:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:43.314 00:21:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1791095 00:05:43.314 00:21:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@828 -- # '[' -z 1791095 ']' 00:05:43.314 00:21:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.314 00:21:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:43.314 00:21:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.314 00:21:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:43.314 00:21:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:43.314 00:21:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.314 [2024-05-15 00:21:09.429519] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
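Aside: the skip_rpc_with_json pass below reduces to three RPC calls followed by a restart that replays the saved configuration. A minimal manual sketch, assuming scripts/rpc.py against the default /var/tmp/spdk.sock socket and a built spdk_tgt:
  ./scripts/rpc.py nvmf_get_transports --trtype tcp                 # expected to fail while no transport exists
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > config.json
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json    # should log "*** TCP Transport Init ***" again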
00:05:43.314 [2024-05-15 00:21:09.429653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1791095 ] 00:05:43.575 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.575 [2024-05-15 00:21:09.558646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.575 [2024-05-15 00:21:09.655496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.150 00:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:44.150 00:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@861 -- # return 0 00:05:44.150 00:21:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:44.150 00:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:44.150 00:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:44.150 [2024-05-15 00:21:10.142330] nvmf_rpc.c:2547:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:44.150 request: 00:05:44.150 { 00:05:44.150 "trtype": "tcp", 00:05:44.150 "method": "nvmf_get_transports", 00:05:44.150 "req_id": 1 00:05:44.150 } 00:05:44.150 Got JSON-RPC error response 00:05:44.150 response: 00:05:44.150 { 00:05:44.150 "code": -19, 00:05:44.150 "message": "No such device" 00:05:44.150 } 00:05:44.150 00:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:05:44.150 00:21:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:44.150 00:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:44.150 00:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:44.150 [2024-05-15 00:21:10.154411] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:44.150 00:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:44.150 00:21:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:44.150 00:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@560 -- # xtrace_disable 00:05:44.150 00:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:44.150 00:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:05:44.150 00:21:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/config.json 00:05:44.150 { 00:05:44.150 "subsystems": [ 00:05:44.150 { 00:05:44.150 "subsystem": "keyring", 00:05:44.150 "config": [] 00:05:44.150 }, 00:05:44.150 { 00:05:44.150 "subsystem": "iobuf", 00:05:44.150 "config": [ 00:05:44.150 { 00:05:44.150 "method": "iobuf_set_options", 00:05:44.150 "params": { 00:05:44.150 "small_pool_count": 8192, 00:05:44.150 "large_pool_count": 1024, 00:05:44.150 "small_bufsize": 8192, 00:05:44.150 "large_bufsize": 135168 00:05:44.150 } 00:05:44.150 } 00:05:44.150 ] 00:05:44.150 }, 00:05:44.150 { 00:05:44.150 "subsystem": "sock", 00:05:44.150 "config": [ 00:05:44.150 { 00:05:44.150 "method": "sock_impl_set_options", 00:05:44.150 "params": { 00:05:44.150 "impl_name": "posix", 00:05:44.150 "recv_buf_size": 2097152, 00:05:44.150 "send_buf_size": 2097152, 00:05:44.150 "enable_recv_pipe": true, 00:05:44.150 "enable_quickack": false, 00:05:44.150 
"enable_placement_id": 0, 00:05:44.150 "enable_zerocopy_send_server": true, 00:05:44.150 "enable_zerocopy_send_client": false, 00:05:44.150 "zerocopy_threshold": 0, 00:05:44.150 "tls_version": 0, 00:05:44.150 "enable_ktls": false 00:05:44.150 } 00:05:44.150 }, 00:05:44.150 { 00:05:44.150 "method": "sock_impl_set_options", 00:05:44.150 "params": { 00:05:44.150 "impl_name": "ssl", 00:05:44.150 "recv_buf_size": 4096, 00:05:44.150 "send_buf_size": 4096, 00:05:44.150 "enable_recv_pipe": true, 00:05:44.150 "enable_quickack": false, 00:05:44.150 "enable_placement_id": 0, 00:05:44.150 "enable_zerocopy_send_server": true, 00:05:44.150 "enable_zerocopy_send_client": false, 00:05:44.150 "zerocopy_threshold": 0, 00:05:44.150 "tls_version": 0, 00:05:44.150 "enable_ktls": false 00:05:44.150 } 00:05:44.150 } 00:05:44.150 ] 00:05:44.150 }, 00:05:44.150 { 00:05:44.150 "subsystem": "vmd", 00:05:44.150 "config": [] 00:05:44.150 }, 00:05:44.150 { 00:05:44.150 "subsystem": "accel", 00:05:44.150 "config": [ 00:05:44.150 { 00:05:44.150 "method": "accel_set_options", 00:05:44.150 "params": { 00:05:44.150 "small_cache_size": 128, 00:05:44.150 "large_cache_size": 16, 00:05:44.150 "task_count": 2048, 00:05:44.150 "sequence_count": 2048, 00:05:44.150 "buf_count": 2048 00:05:44.150 } 00:05:44.150 } 00:05:44.150 ] 00:05:44.150 }, 00:05:44.150 { 00:05:44.150 "subsystem": "bdev", 00:05:44.150 "config": [ 00:05:44.150 { 00:05:44.150 "method": "bdev_set_options", 00:05:44.150 "params": { 00:05:44.150 "bdev_io_pool_size": 65535, 00:05:44.150 "bdev_io_cache_size": 256, 00:05:44.150 "bdev_auto_examine": true, 00:05:44.150 "iobuf_small_cache_size": 128, 00:05:44.150 "iobuf_large_cache_size": 16 00:05:44.150 } 00:05:44.150 }, 00:05:44.150 { 00:05:44.150 "method": "bdev_raid_set_options", 00:05:44.150 "params": { 00:05:44.150 "process_window_size_kb": 1024 00:05:44.150 } 00:05:44.150 }, 00:05:44.150 { 00:05:44.150 "method": "bdev_iscsi_set_options", 00:05:44.150 "params": { 00:05:44.150 "timeout_sec": 30 00:05:44.150 } 00:05:44.150 }, 00:05:44.150 { 00:05:44.150 "method": "bdev_nvme_set_options", 00:05:44.150 "params": { 00:05:44.150 "action_on_timeout": "none", 00:05:44.150 "timeout_us": 0, 00:05:44.150 "timeout_admin_us": 0, 00:05:44.150 "keep_alive_timeout_ms": 10000, 00:05:44.150 "arbitration_burst": 0, 00:05:44.150 "low_priority_weight": 0, 00:05:44.150 "medium_priority_weight": 0, 00:05:44.150 "high_priority_weight": 0, 00:05:44.150 "nvme_adminq_poll_period_us": 10000, 00:05:44.150 "nvme_ioq_poll_period_us": 0, 00:05:44.150 "io_queue_requests": 0, 00:05:44.150 "delay_cmd_submit": true, 00:05:44.150 "transport_retry_count": 4, 00:05:44.150 "bdev_retry_count": 3, 00:05:44.150 "transport_ack_timeout": 0, 00:05:44.150 "ctrlr_loss_timeout_sec": 0, 00:05:44.150 "reconnect_delay_sec": 0, 00:05:44.150 "fast_io_fail_timeout_sec": 0, 00:05:44.150 "disable_auto_failback": false, 00:05:44.150 "generate_uuids": false, 00:05:44.150 "transport_tos": 0, 00:05:44.150 "nvme_error_stat": false, 00:05:44.150 "rdma_srq_size": 0, 00:05:44.150 "io_path_stat": false, 00:05:44.150 "allow_accel_sequence": false, 00:05:44.150 "rdma_max_cq_size": 0, 00:05:44.150 "rdma_cm_event_timeout_ms": 0, 00:05:44.150 "dhchap_digests": [ 00:05:44.150 "sha256", 00:05:44.150 "sha384", 00:05:44.150 "sha512" 00:05:44.150 ], 00:05:44.150 "dhchap_dhgroups": [ 00:05:44.150 "null", 00:05:44.150 "ffdhe2048", 00:05:44.150 "ffdhe3072", 00:05:44.150 "ffdhe4096", 00:05:44.150 "ffdhe6144", 00:05:44.150 "ffdhe8192" 00:05:44.150 ] 00:05:44.150 } 00:05:44.150 }, 00:05:44.150 { 
00:05:44.150 "method": "bdev_nvme_set_hotplug", 00:05:44.150 "params": { 00:05:44.150 "period_us": 100000, 00:05:44.150 "enable": false 00:05:44.150 } 00:05:44.150 }, 00:05:44.150 { 00:05:44.150 "method": "bdev_wait_for_examine" 00:05:44.150 } 00:05:44.150 ] 00:05:44.150 }, 00:05:44.150 { 00:05:44.150 "subsystem": "scsi", 00:05:44.150 "config": null 00:05:44.150 }, 00:05:44.150 { 00:05:44.150 "subsystem": "scheduler", 00:05:44.150 "config": [ 00:05:44.150 { 00:05:44.150 "method": "framework_set_scheduler", 00:05:44.150 "params": { 00:05:44.150 "name": "static" 00:05:44.150 } 00:05:44.150 } 00:05:44.150 ] 00:05:44.150 }, 00:05:44.150 { 00:05:44.150 "subsystem": "vhost_scsi", 00:05:44.150 "config": [] 00:05:44.150 }, 00:05:44.150 { 00:05:44.150 "subsystem": "vhost_blk", 00:05:44.150 "config": [] 00:05:44.150 }, 00:05:44.150 { 00:05:44.150 "subsystem": "ublk", 00:05:44.150 "config": [] 00:05:44.150 }, 00:05:44.150 { 00:05:44.150 "subsystem": "nbd", 00:05:44.150 "config": [] 00:05:44.150 }, 00:05:44.150 { 00:05:44.150 "subsystem": "nvmf", 00:05:44.150 "config": [ 00:05:44.150 { 00:05:44.150 "method": "nvmf_set_config", 00:05:44.150 "params": { 00:05:44.150 "discovery_filter": "match_any", 00:05:44.150 "admin_cmd_passthru": { 00:05:44.150 "identify_ctrlr": false 00:05:44.150 } 00:05:44.150 } 00:05:44.150 }, 00:05:44.150 { 00:05:44.150 "method": "nvmf_set_max_subsystems", 00:05:44.150 "params": { 00:05:44.150 "max_subsystems": 1024 00:05:44.150 } 00:05:44.150 }, 00:05:44.150 { 00:05:44.150 "method": "nvmf_set_crdt", 00:05:44.150 "params": { 00:05:44.150 "crdt1": 0, 00:05:44.150 "crdt2": 0, 00:05:44.150 "crdt3": 0 00:05:44.150 } 00:05:44.150 }, 00:05:44.150 { 00:05:44.150 "method": "nvmf_create_transport", 00:05:44.150 "params": { 00:05:44.150 "trtype": "TCP", 00:05:44.150 "max_queue_depth": 128, 00:05:44.150 "max_io_qpairs_per_ctrlr": 127, 00:05:44.150 "in_capsule_data_size": 4096, 00:05:44.150 "max_io_size": 131072, 00:05:44.150 "io_unit_size": 131072, 00:05:44.151 "max_aq_depth": 128, 00:05:44.151 "num_shared_buffers": 511, 00:05:44.151 "buf_cache_size": 4294967295, 00:05:44.151 "dif_insert_or_strip": false, 00:05:44.151 "zcopy": false, 00:05:44.151 "c2h_success": true, 00:05:44.151 "sock_priority": 0, 00:05:44.151 "abort_timeout_sec": 1, 00:05:44.151 "ack_timeout": 0, 00:05:44.151 "data_wr_pool_size": 0 00:05:44.151 } 00:05:44.151 } 00:05:44.151 ] 00:05:44.151 }, 00:05:44.151 { 00:05:44.151 "subsystem": "iscsi", 00:05:44.151 "config": [ 00:05:44.151 { 00:05:44.151 "method": "iscsi_set_options", 00:05:44.151 "params": { 00:05:44.151 "node_base": "iqn.2016-06.io.spdk", 00:05:44.151 "max_sessions": 128, 00:05:44.151 "max_connections_per_session": 2, 00:05:44.151 "max_queue_depth": 64, 00:05:44.151 "default_time2wait": 2, 00:05:44.151 "default_time2retain": 20, 00:05:44.151 "first_burst_length": 8192, 00:05:44.151 "immediate_data": true, 00:05:44.151 "allow_duplicated_isid": false, 00:05:44.151 "error_recovery_level": 0, 00:05:44.151 "nop_timeout": 60, 00:05:44.151 "nop_in_interval": 30, 00:05:44.151 "disable_chap": false, 00:05:44.151 "require_chap": false, 00:05:44.151 "mutual_chap": false, 00:05:44.151 "chap_group": 0, 00:05:44.151 "max_large_datain_per_connection": 64, 00:05:44.151 "max_r2t_per_connection": 4, 00:05:44.151 "pdu_pool_size": 36864, 00:05:44.151 "immediate_data_pool_size": 16384, 00:05:44.151 "data_out_pool_size": 2048 00:05:44.151 } 00:05:44.151 } 00:05:44.151 ] 00:05:44.151 } 00:05:44.151 ] 00:05:44.151 } 00:05:44.151 00:21:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 
-- # trap - SIGINT SIGTERM EXIT 00:05:44.151 00:21:10 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1791095 00:05:44.151 00:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 1791095 ']' 00:05:44.151 00:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 1791095 00:05:44.151 00:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:05:44.151 00:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:44.151 00:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1791095 00:05:44.452 00:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:44.452 00:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:44.452 00:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1791095' 00:05:44.452 killing process with pid 1791095 00:05:44.452 00:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 1791095 00:05:44.452 00:21:10 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 1791095 00:05:45.393 00:21:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1791442 00:05:45.393 00:21:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:45.393 00:21:11 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/config.json 00:05:50.677 00:21:16 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1791442 00:05:50.677 00:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@947 -- # '[' -z 1791442 ']' 00:05:50.677 00:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@951 -- # kill -0 1791442 00:05:50.677 00:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # uname 00:05:50.677 00:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:50.677 00:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1791442 00:05:50.677 00:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:50.677 00:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:50.677 00:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1791442' 00:05:50.677 killing process with pid 1791442 00:05:50.677 00:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # kill 1791442 00:05:50.677 00:21:16 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # wait 1791442 00:05:51.249 00:21:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/log.txt 00:05:51.249 00:21:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/log.txt 00:05:51.249 00:05:51.249 real 0m7.843s 00:05:51.249 user 0m7.392s 00:05:51.249 sys 0m0.798s 00:05:51.249 00:21:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:51.249 00:21:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:51.249 
************************************ 00:05:51.249 END TEST skip_rpc_with_json 00:05:51.249 ************************************ 00:05:51.249 00:21:17 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:51.249 00:21:17 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:51.249 00:21:17 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:51.249 00:21:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.249 ************************************ 00:05:51.249 START TEST skip_rpc_with_delay 00:05:51.249 ************************************ 00:05:51.249 00:21:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1122 -- # test_skip_rpc_with_delay 00:05:51.249 00:21:17 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:51.249 00:21:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@649 -- # local es=0 00:05:51.249 00:21:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:51.249 00:21:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.249 00:21:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:51.249 00:21:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.249 00:21:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:51.249 00:21:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.249 00:21:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:51.249 00:21:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:05:51.249 00:21:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:51.249 00:21:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:51.249 [2024-05-15 00:21:17.287373] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
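The failing invocation just logged is the whole point of skip_rpc_with_delay: --wait-for-rpc only makes sense when the RPC server will actually start, so combining it with --no-rpc-server has to be rejected and the script only checks for a non-zero exit status. A minimal standalone sketch of that check, with the binary path shortened relative to the workspace root shown in the trace:

  # skip_rpc_with_delay in one line: this flag combination must be rejected
  if spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo 'FAIL: --wait-for-rpc was accepted even though no RPC server will start' >&2
      exit 1
  fi
  echo 'OK: spdk_tgt refused --wait-for-rpc together with --no-rpc-server'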
00:05:51.249 [2024-05-15 00:21:17.287469] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:51.249 00:21:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # es=1 00:05:51.249 00:21:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:51.249 00:21:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:05:51.249 00:21:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:51.249 00:05:51.249 real 0m0.086s 00:05:51.249 user 0m0.041s 00:05:51.249 sys 0m0.044s 00:05:51.249 00:21:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:51.249 00:21:17 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:51.249 ************************************ 00:05:51.249 END TEST skip_rpc_with_delay 00:05:51.249 ************************************ 00:05:51.249 00:21:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:51.249 00:21:17 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:51.249 00:21:17 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:51.249 00:21:17 skip_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:51.249 00:21:17 skip_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:51.249 00:21:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:51.249 ************************************ 00:05:51.249 START TEST exit_on_failed_rpc_init 00:05:51.249 ************************************ 00:05:51.249 00:21:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1122 -- # test_exit_on_failed_rpc_init 00:05:51.249 00:21:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1792688 00:05:51.249 00:21:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1792688 00:05:51.249 00:21:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@828 -- # '[' -z 1792688 ']' 00:05:51.249 00:21:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.249 00:21:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:51.249 00:21:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:51.249 00:21:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:51.249 00:21:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:51.249 00:21:17 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:51.510 [2024-05-15 00:21:17.465330] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:05:51.510 [2024-05-15 00:21:17.465472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1792688 ] 00:05:51.510 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.510 [2024-05-15 00:21:17.580259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.770 [2024-05-15 00:21:17.678487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.031 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:52.031 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@861 -- # return 0 00:05:52.031 00:21:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.031 00:21:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:52.031 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@649 -- # local es=0 00:05:52.031 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:52.031 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.031 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:52.031 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.031 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:52.031 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.031 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:05:52.031 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:05:52.031 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:52.031 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:52.290 [2024-05-15 00:21:18.211357] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:05:52.290 [2024-05-15 00:21:18.211437] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1792742 ] 00:05:52.290 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.290 [2024-05-15 00:21:18.322920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.552 [2024-05-15 00:21:18.480818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.552 [2024-05-15 00:21:18.480929] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
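The rpc.c error above is exactly what exit_on_failed_rpc_init is after: both targets defaulted to /var/tmp/spdk.sock, so the second one could not bind its RPC listener and spdk_app_start bailed out (the follow-up messages continue below). When two targets really are wanted side by side, each needs its own socket via -r, as the json_config tests later in this run do. A rough sketch, assuming an illustrative second socket path that does not appear in this trace:

  # first instance owns the default RPC socket /var/tmp/spdk.sock
  spdk/build/bin/spdk_tgt -m 0x1 &
  # a second instance on the same default socket fails exactly as logged above;
  # pointing -r at its own UNIX socket (illustrative path) avoids the collision
  spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_second.sock &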
00:05:52.552 [2024-05-15 00:21:18.480952] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:52.552 [2024-05-15 00:21:18.480968] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:52.813 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # es=234 00:05:52.813 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:05:52.813 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # es=106 00:05:52.814 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # case "$es" in 00:05:52.814 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@669 -- # es=1 00:05:52.814 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:05:52.814 00:21:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:52.814 00:21:18 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1792688 00:05:52.814 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@947 -- # '[' -z 1792688 ']' 00:05:52.814 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@951 -- # kill -0 1792688 00:05:52.814 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # uname 00:05:52.814 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:05:52.814 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1792688 00:05:52.814 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:05:52.814 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:05:52.814 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1792688' 00:05:52.814 killing process with pid 1792688 00:05:52.814 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # kill 1792688 00:05:52.814 00:21:18 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # wait 1792688 00:05:53.754 00:05:53.754 real 0m2.260s 00:05:53.754 user 0m2.611s 00:05:53.754 sys 0m0.514s 00:05:53.754 00:21:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:53.754 00:21:19 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:53.754 ************************************ 00:05:53.754 END TEST exit_on_failed_rpc_init 00:05:53.754 ************************************ 00:05:53.754 00:21:19 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/config.json 00:05:53.754 00:05:53.754 real 0m16.419s 00:05:53.754 user 0m15.679s 00:05:53.754 sys 0m1.961s 00:05:53.754 00:21:19 skip_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:53.754 00:21:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.754 ************************************ 00:05:53.754 END TEST skip_rpc 00:05:53.754 ************************************ 00:05:53.754 00:21:19 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:53.754 00:21:19 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:53.754 00:21:19 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:53.754 00:21:19 -- 
common/autotest_common.sh@10 -- # set +x 00:05:53.754 ************************************ 00:05:53.754 START TEST rpc_client 00:05:53.754 ************************************ 00:05:53.754 00:21:19 rpc_client -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:53.754 * Looking for test storage... 00:05:53.754 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client 00:05:53.754 00:21:19 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:53.754 OK 00:05:53.754 00:21:19 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:53.754 00:05:53.754 real 0m0.134s 00:05:53.754 user 0m0.060s 00:05:53.754 sys 0m0.082s 00:05:53.754 00:21:19 rpc_client -- common/autotest_common.sh@1123 -- # xtrace_disable 00:05:53.754 00:21:19 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:53.754 ************************************ 00:05:53.754 END TEST rpc_client 00:05:53.754 ************************************ 00:05:53.754 00:21:19 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config.sh 00:05:53.754 00:21:19 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:05:53.754 00:21:19 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:05:53.754 00:21:19 -- common/autotest_common.sh@10 -- # set +x 00:05:54.016 ************************************ 00:05:54.016 START TEST json_config 00:05:54.016 ************************************ 00:05:54.016 00:21:19 json_config -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config.sh 00:05:54.016 00:21:20 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:05:54.016 00:21:20 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:54.016 00:21:20 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:54.016 00:21:20 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:54.016 00:21:20 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:54.016 00:21:20 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:54.016 00:21:20 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:54.016 00:21:20 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:54.016 00:21:20 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:54.016 00:21:20 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:54.016 00:21:20 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:54.016 00:21:20 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:54.016 00:21:20 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:05:54.016 00:21:20 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:05:54.016 00:21:20 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:54.016 00:21:20 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:54.016 00:21:20 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:54.016 00:21:20 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:54.016 00:21:20 json_config -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:05:54.016 00:21:20 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:54.016 00:21:20 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:54.016 00:21:20 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:54.016 00:21:20 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.016 00:21:20 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.016 00:21:20 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.016 00:21:20 json_config -- paths/export.sh@5 -- # export PATH 00:05:54.016 00:21:20 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:54.016 00:21:20 json_config -- nvmf/common.sh@47 -- # : 0 00:05:54.016 00:21:20 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:54.016 00:21:20 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:54.016 00:21:20 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:54.016 00:21:20 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:54.016 00:21:20 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:54.016 00:21:20 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:54.016 00:21:20 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:54.016 00:21:20 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:54.016 00:21:20 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/common.sh 00:05:54.016 00:21:20 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:54.016 00:21:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:54.016 00:21:20 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:54.016 00:21:20 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + 
SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:54.016 00:21:20 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:54.016 00:21:20 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:54.016 00:21:20 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:54.016 00:21:20 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:54.016 00:21:20 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:54.016 00:21:20 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:54.016 00:21:20 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_initiator_config.json') 00:05:54.016 00:21:20 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:54.016 00:21:20 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:54.016 00:21:20 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:54.016 00:21:20 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:54.016 INFO: JSON configuration test init 00:05:54.016 00:21:20 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:54.016 00:21:20 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:54.016 00:21:20 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:54.016 00:21:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.016 00:21:20 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:54.016 00:21:20 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:54.016 00:21:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.016 00:21:20 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:54.016 00:21:20 json_config -- json_config/common.sh@9 -- # local app=target 00:05:54.016 00:21:20 json_config -- json_config/common.sh@10 -- # shift 00:05:54.016 00:21:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:54.016 00:21:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:54.016 00:21:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:54.017 00:21:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:54.017 00:21:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:54.017 00:21:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1793347 00:05:54.017 00:21:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:54.017 Waiting for target to run... 
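What json_config_test_start_app sets up here is the usual pattern for config-driven tests: start the target paused with --wait-for-rpc on its own RPC socket, then push a full configuration through load_config, as the trace that follows shows. Condensed into a sketch with workspace-relative paths; the pipe between gen_nvme.sh and load_config is inferred from the call order rather than shown explicitly in the trace:

  # start the target on a private RPC socket, paused until a config arrives
  spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # (the harness first polls the socket via waitforlisten before issuing any RPC)
  # generate a JSON config including subsystems and feed it to the paused target
  spdk/scripts/gen_nvme.sh --json-with-subsystems \
      | spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config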
00:05:54.017 00:21:20 json_config -- json_config/common.sh@25 -- # waitforlisten 1793347 /var/tmp/spdk_tgt.sock 00:05:54.017 00:21:20 json_config -- common/autotest_common.sh@828 -- # '[' -z 1793347 ']' 00:05:54.017 00:21:20 json_config -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:54.017 00:21:20 json_config -- common/autotest_common.sh@833 -- # local max_retries=100 00:05:54.017 00:21:20 json_config -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:54.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:54.017 00:21:20 json_config -- common/autotest_common.sh@837 -- # xtrace_disable 00:05:54.017 00:21:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.017 00:21:20 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:54.017 [2024-05-15 00:21:20.139471] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:05:54.017 [2024-05-15 00:21:20.139615] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1793347 ] 00:05:54.277 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.535 [2024-05-15 00:21:20.528056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.535 [2024-05-15 00:21:20.610980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.794 00:21:20 json_config -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:05:54.795 00:21:20 json_config -- common/autotest_common.sh@861 -- # return 0 00:05:54.795 00:21:20 json_config -- json_config/common.sh@26 -- # echo '' 00:05:54.795 00:05:54.795 00:21:20 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:54.795 00:21:20 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:54.795 00:21:20 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:05:54.795 00:21:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.795 00:21:20 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:54.795 00:21:20 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:54.795 00:21:20 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:05:54.795 00:21:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.795 00:21:20 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:54.795 00:21:20 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:54.795 00:21:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:01.375 00:21:26 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:01.375 00:21:26 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:01.375 00:21:26 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:06:01.375 00:21:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.375 00:21:26 json_config -- json_config/json_config.sh@45 -- # 
local ret=0 00:06:01.375 00:21:26 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:01.375 00:21:26 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:01.375 00:21:26 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:01.375 00:21:26 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:01.375 00:21:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:01.375 00:21:27 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:01.375 00:21:27 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:01.375 00:21:27 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:01.375 00:21:27 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:01.375 00:21:27 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:01.375 00:21:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.375 00:21:27 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:01.375 00:21:27 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:01.375 00:21:27 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:01.375 00:21:27 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:01.375 00:21:27 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:01.375 00:21:27 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:01.375 00:21:27 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:01.375 00:21:27 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:06:01.375 00:21:27 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.375 00:21:27 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:01.375 00:21:27 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:01.375 00:21:27 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:01.375 00:21:27 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:01.375 00:21:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:01.375 MallocForNvmf0 00:06:01.375 00:21:27 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:01.375 00:21:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:01.375 MallocForNvmf1 00:06:01.375 00:21:27 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:01.375 00:21:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:01.637 [2024-05-15 00:21:27.617342] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:01.637 00:21:27 json_config -- json_config/json_config.sh@246 -- # tgt_rpc 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:01.637 00:21:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:01.637 00:21:27 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:01.637 00:21:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:01.897 00:21:27 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:01.897 00:21:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:02.158 00:21:28 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:02.158 00:21:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:02.158 [2024-05-15 00:21:28.245442] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:02.158 [2024-05-15 00:21:28.245863] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:02.158 00:21:28 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:02.158 00:21:28 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:02.158 00:21:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.158 00:21:28 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:02.158 00:21:28 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:02.158 00:21:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.417 00:21:28 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:02.417 00:21:28 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:02.417 00:21:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:02.417 MallocBdevForConfigChangeCheck 00:06:02.417 00:21:28 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:02.417 00:21:28 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:02.417 00:21:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:02.417 00:21:28 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:02.417 00:21:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:02.677 00:21:28 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:02.677 INFO: shutting down applications... 
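The RPC calls traced in this stretch are the standard way to wire up an NVMe-oF/TCP target by hand, and they read best as one sequence. Collected into a sketch using the same socket, NQN, names and sizes as the trace; the rpc() helper and the redirect of save_config to a workspace-relative file name are added for brevity:

  rpc() { spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
  # malloc bdevs to export (size in MB, block size in bytes)
  rpc bdev_malloc_create 8 512  --name MallocForNvmf0
  rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  # TCP transport, then a subsystem carrying both namespaces and a listener
  rpc nvmf_create_transport -t tcp -u 8192 -c 0
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  # snapshot the live configuration as JSON (what the test later diffs against)
  rpc save_config > spdk_tgt_config.json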
00:06:02.677 00:21:28 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:02.677 00:21:28 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:02.677 00:21:28 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:02.677 00:21:28 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:07.965 Calling clear_iscsi_subsystem 00:06:07.965 Calling clear_nvmf_subsystem 00:06:07.965 Calling clear_nbd_subsystem 00:06:07.965 Calling clear_ublk_subsystem 00:06:07.965 Calling clear_vhost_blk_subsystem 00:06:07.965 Calling clear_vhost_scsi_subsystem 00:06:07.965 Calling clear_bdev_subsystem 00:06:07.965 00:21:33 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py 00:06:07.965 00:21:33 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:07.965 00:21:33 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:07.965 00:21:33 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:07.965 00:21:33 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:07.965 00:21:33 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:07.965 00:21:33 json_config -- json_config/json_config.sh@345 -- # break 00:06:07.965 00:21:33 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:07.965 00:21:33 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:07.965 00:21:33 json_config -- json_config/common.sh@31 -- # local app=target 00:06:07.965 00:21:33 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:07.965 00:21:33 json_config -- json_config/common.sh@35 -- # [[ -n 1793347 ]] 00:06:07.965 00:21:33 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1793347 00:06:07.965 [2024-05-15 00:21:33.872291] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:07.965 00:21:33 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:07.965 00:21:33 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:07.965 00:21:33 json_config -- json_config/common.sh@41 -- # kill -0 1793347 00:06:07.965 00:21:33 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:08.224 00:21:34 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:08.224 00:21:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.224 00:21:34 json_config -- json_config/common.sh@41 -- # kill -0 1793347 00:06:08.224 00:21:34 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:08.224 00:21:34 json_config -- json_config/common.sh@43 -- # break 00:06:08.224 00:21:34 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:08.224 00:21:34 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:08.224 SPDK target shutdown done 00:06:08.224 00:21:34 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
00:06:08.224 INFO: relaunching applications... 00:06:08.224 00:21:34 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:06:08.224 00:21:34 json_config -- json_config/common.sh@9 -- # local app=target 00:06:08.224 00:21:34 json_config -- json_config/common.sh@10 -- # shift 00:06:08.224 00:21:34 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:08.224 00:21:34 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:08.224 00:21:34 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:08.224 00:21:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.224 00:21:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.224 00:21:34 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1796224 00:06:08.224 00:21:34 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:08.224 Waiting for target to run... 00:06:08.224 00:21:34 json_config -- json_config/common.sh@25 -- # waitforlisten 1796224 /var/tmp/spdk_tgt.sock 00:06:08.224 00:21:34 json_config -- common/autotest_common.sh@828 -- # '[' -z 1796224 ']' 00:06:08.224 00:21:34 json_config -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:08.224 00:21:34 json_config -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:08.224 00:21:34 json_config -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:08.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:08.224 00:21:34 json_config -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:08.224 00:21:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.224 00:21:34 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:06:08.484 [2024-05-15 00:21:34.479710] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
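The relaunch command traced here closes the loop: the JSON written by save_config before shutdown is handed straight back to the target at startup via --json. Reduced to its essentials, with paths shortened relative to the workspace root:

  # capture the running configuration, then restart the target from that file
  spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
  # ... SIGINT the old target and wait for it to exit ...
  spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
      --json spdk_tgt_config.json &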
00:06:08.484 [2024-05-15 00:21:34.479848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1796224 ] 00:06:08.484 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.053 [2024-05-15 00:21:34.974781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.053 [2024-05-15 00:21:35.071902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.626 [2024-05-15 00:21:41.176214] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:15.626 [2024-05-15 00:21:41.208107] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:06:15.626 [2024-05-15 00:21:41.208528] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:15.626 00:21:41 json_config -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:15.626 00:21:41 json_config -- common/autotest_common.sh@861 -- # return 0 00:06:15.626 00:21:41 json_config -- json_config/common.sh@26 -- # echo '' 00:06:15.626 00:06:15.626 00:21:41 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:15.626 00:21:41 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:15.626 INFO: Checking if target configuration is the same... 00:06:15.626 00:21:41 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:06:15.626 00:21:41 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:15.626 00:21:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:15.626 + '[' 2 -ne 2 ']' 00:06:15.626 +++ dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:15.626 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/../.. 00:06:15.626 + rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:06:15.626 +++ basename /dev/fd/62 00:06:15.626 ++ mktemp /tmp/62.XXX 00:06:15.626 + tmp_file_1=/tmp/62.a2m 00:06:15.626 +++ basename /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:06:15.626 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:15.626 + tmp_file_2=/tmp/spdk_tgt_config.json.SCR 00:06:15.626 + ret=0 00:06:15.626 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:15.626 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:15.626 + diff -u /tmp/62.a2m /tmp/spdk_tgt_config.json.SCR 00:06:15.626 + echo 'INFO: JSON config files are the same' 00:06:15.626 INFO: JSON config files are the same 00:06:15.626 + rm /tmp/62.a2m /tmp/spdk_tgt_config.json.SCR 00:06:15.626 + exit 0 00:06:15.626 00:21:41 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:15.626 00:21:41 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:15.626 INFO: changing configuration and checking if this can be detected... 
00:06:15.626 00:21:41 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:15.626 00:21:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:15.626 00:21:41 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:15.626 00:21:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:15.626 00:21:41 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:06:15.626 + '[' 2 -ne 2 ']' 00:06:15.626 +++ dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:15.626 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/../.. 00:06:15.626 + rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:06:15.626 +++ basename /dev/fd/62 00:06:15.626 ++ mktemp /tmp/62.XXX 00:06:15.626 + tmp_file_1=/tmp/62.vFt 00:06:15.626 +++ basename /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:06:15.626 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:15.626 + tmp_file_2=/tmp/spdk_tgt_config.json.1zP 00:06:15.626 + ret=0 00:06:15.626 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:15.886 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:16.146 + diff -u /tmp/62.vFt /tmp/spdk_tgt_config.json.1zP 00:06:16.146 + ret=1 00:06:16.146 + echo '=== Start of file: /tmp/62.vFt ===' 00:06:16.146 + cat /tmp/62.vFt 00:06:16.146 + echo '=== End of file: /tmp/62.vFt ===' 00:06:16.146 + echo '' 00:06:16.146 + echo '=== Start of file: /tmp/spdk_tgt_config.json.1zP ===' 00:06:16.146 + cat /tmp/spdk_tgt_config.json.1zP 00:06:16.146 + echo '=== End of file: /tmp/spdk_tgt_config.json.1zP ===' 00:06:16.146 + echo '' 00:06:16.146 + rm /tmp/62.vFt /tmp/spdk_tgt_config.json.1zP 00:06:16.146 + exit 1 00:06:16.146 00:21:42 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:16.146 INFO: configuration change detected. 
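json_diff.sh, whose trace appears twice above, does nothing more exotic than normalising two JSON configs with config_filter.py and diffing them: identical output proves the relaunch reproduced the saved state, and deleting MallocBdevForConfigChangeCheck is a cheap way to force a visible difference on the second pass. Roughly equivalent to the following sketch; the temporary-file names are illustrative and the real script reads one side from a file descriptor:

  rpc() { spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
  filter=spdk/test/json_config/config_filter.py
  # normalise the live config and the saved file, then compare
  rpc save_config | $filter -method sort > /tmp/live.json
  $filter -method sort < spdk_tgt_config.json > /tmp/saved.json
  diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same'
  # any change to the running target now shows up in the next diff
  rpc bdev_malloc_delete MallocBdevForConfigChangeCheck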
00:06:16.146 00:21:42 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:16.146 00:21:42 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:16.146 00:21:42 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:06:16.146 00:21:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.146 00:21:42 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:16.146 00:21:42 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:16.146 00:21:42 json_config -- json_config/json_config.sh@317 -- # [[ -n 1796224 ]] 00:06:16.146 00:21:42 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:16.146 00:21:42 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:16.146 00:21:42 json_config -- common/autotest_common.sh@721 -- # xtrace_disable 00:06:16.146 00:21:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.146 00:21:42 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:16.146 00:21:42 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:16.146 00:21:42 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:16.146 00:21:42 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:16.146 00:21:42 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:16.146 00:21:42 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:16.146 00:21:42 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:16.146 00:21:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:16.146 00:21:42 json_config -- json_config/json_config.sh@323 -- # killprocess 1796224 00:06:16.146 00:21:42 json_config -- common/autotest_common.sh@947 -- # '[' -z 1796224 ']' 00:06:16.146 00:21:42 json_config -- common/autotest_common.sh@951 -- # kill -0 1796224 00:06:16.146 00:21:42 json_config -- common/autotest_common.sh@952 -- # uname 00:06:16.146 00:21:42 json_config -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:16.146 00:21:42 json_config -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1796224 00:06:16.146 00:21:42 json_config -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:16.146 00:21:42 json_config -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:16.146 00:21:42 json_config -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1796224' 00:06:16.146 killing process with pid 1796224 00:06:16.146 00:21:42 json_config -- common/autotest_common.sh@966 -- # kill 1796224 00:06:16.146 [2024-05-15 00:21:42.186050] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:06:16.146 00:21:42 json_config -- common/autotest_common.sh@971 -- # wait 1796224 00:06:19.435 00:21:45 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:06:19.435 00:21:45 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:19.435 00:21:45 json_config -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:19.435 00:21:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.435 00:21:45 json_config 
-- json_config/json_config.sh@328 -- # return 0 00:06:19.435 00:21:45 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:19.435 INFO: Success 00:06:19.435 00:06:19.435 real 0m25.302s 00:06:19.435 user 0m24.108s 00:06:19.435 sys 0m2.391s 00:06:19.435 00:21:45 json_config -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:19.435 00:21:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:19.435 ************************************ 00:06:19.435 END TEST json_config 00:06:19.435 ************************************ 00:06:19.435 00:21:45 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:19.435 00:21:45 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:19.435 00:21:45 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:19.435 00:21:45 -- common/autotest_common.sh@10 -- # set +x 00:06:19.435 ************************************ 00:06:19.435 START TEST json_config_extra_key 00:06:19.435 ************************************ 00:06:19.435 00:21:45 json_config_extra_key -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:19.435 00:21:45 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:06:19.435 00:21:45 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:19.435 00:21:45 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:19.435 00:21:45 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:19.435 00:21:45 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:19.435 00:21:45 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:19.435 00:21:45 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:19.435 00:21:45 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:19.435 00:21:45 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:19.435 00:21:45 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:19.435 00:21:45 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:19.435 00:21:45 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:19.435 00:21:45 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:06:19.435 00:21:45 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:06:19.435 00:21:45 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:19.435 00:21:45 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:19.435 00:21:45 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:19.435 00:21:45 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:19.435 00:21:45 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:06:19.435 00:21:45 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:19.435 00:21:45 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:19.435 00:21:45 json_config_extra_key -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:19.435 00:21:45 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.436 00:21:45 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.436 00:21:45 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.436 00:21:45 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:19.436 00:21:45 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:19.436 00:21:45 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:19.436 00:21:45 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:19.436 00:21:45 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:19.436 00:21:45 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:19.436 00:21:45 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:19.436 00:21:45 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:19.436 00:21:45 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:19.436 00:21:45 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:19.436 00:21:45 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:19.436 00:21:45 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/common.sh 00:06:19.436 00:21:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:19.436 00:21:45 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:19.436 00:21:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:19.436 00:21:45 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:19.436 00:21:45 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:19.436 00:21:45 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:19.436 00:21:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:19.436 00:21:45 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:19.436 00:21:45 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:19.436 00:21:45 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:19.436 INFO: launching applications... 00:06:19.436 00:21:45 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json 00:06:19.436 00:21:45 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:19.436 00:21:45 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:19.436 00:21:45 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:19.436 00:21:45 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:19.436 00:21:45 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:19.436 00:21:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:19.436 00:21:45 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:19.436 00:21:45 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1798441 00:06:19.436 00:21:45 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:19.436 Waiting for target to run... 00:06:19.436 00:21:45 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1798441 /var/tmp/spdk_tgt.sock 00:06:19.436 00:21:45 json_config_extra_key -- common/autotest_common.sh@828 -- # '[' -z 1798441 ']' 00:06:19.436 00:21:45 json_config_extra_key -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:19.436 00:21:45 json_config_extra_key -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:19.436 00:21:45 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json 00:06:19.436 00:21:45 json_config_extra_key -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:19.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:19.436 00:21:45 json_config_extra_key -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:19.436 00:21:45 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:19.436 [2024-05-15 00:21:45.472854] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
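json_config_extra_key only needs to prove that the target comes up from test/json_config/extra_key.json and then shuts down cleanly; the interesting part is the shutdown loop traced just below, which sends SIGINT and polls until the PID disappears, the same loop used after the earlier json_config run. A condensed sketch of that wait-for-exit pattern (the tgt_pid variable name is illustrative):

  kill -SIGINT "$tgt_pid"                        # ask the target to shut down gracefully
  for (( i = 0; i < 30; i++ )); do
      kill -0 "$tgt_pid" 2>/dev/null || break    # process gone? stop waiting
      sleep 0.5
  done
  echo 'SPDK target shutdown done'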
00:06:19.436 [2024-05-15 00:21:45.472985] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1798441 ] 00:06:19.436 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.005 [2024-05-15 00:21:45.997782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.005 [2024-05-15 00:21:46.093576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.265 00:21:46 json_config_extra_key -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:20.265 00:21:46 json_config_extra_key -- common/autotest_common.sh@861 -- # return 0 00:06:20.265 00:21:46 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:20.265 00:06:20.265 00:21:46 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:20.265 INFO: shutting down applications... 00:06:20.265 00:21:46 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:20.265 00:21:46 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:20.265 00:21:46 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:20.265 00:21:46 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1798441 ]] 00:06:20.265 00:21:46 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1798441 00:06:20.265 00:21:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:20.265 00:21:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:20.265 00:21:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1798441 00:06:20.265 00:21:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:20.835 00:21:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:20.835 00:21:46 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:20.835 00:21:46 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1798441 00:06:20.835 00:21:46 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:21.403 00:21:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:21.403 00:21:47 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:21.403 00:21:47 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1798441 00:06:21.403 00:21:47 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:21.403 00:21:47 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:21.403 00:21:47 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:21.403 00:21:47 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:21.403 SPDK target shutdown done 00:06:21.403 00:21:47 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:21.403 Success 00:06:21.403 00:06:21.403 real 0m2.076s 00:06:21.403 user 0m1.556s 00:06:21.403 sys 0m0.682s 00:06:21.403 00:21:47 json_config_extra_key -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:21.403 00:21:47 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:21.403 ************************************ 00:06:21.403 END TEST json_config_extra_key 00:06:21.403 ************************************ 00:06:21.403 00:21:47 -- spdk/autotest.sh@170 -- # run_test alias_rpc 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:21.403 00:21:47 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:21.403 00:21:47 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:21.403 00:21:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.403 ************************************ 00:06:21.403 START TEST alias_rpc 00:06:21.403 ************************************ 00:06:21.403 00:21:47 alias_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:21.403 * Looking for test storage... 00:06:21.403 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc 00:06:21.403 00:21:47 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:21.403 00:21:47 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1798947 00:06:21.403 00:21:47 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1798947 00:06:21.403 00:21:47 alias_rpc -- common/autotest_common.sh@828 -- # '[' -z 1798947 ']' 00:06:21.403 00:21:47 alias_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.403 00:21:47 alias_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:21.403 00:21:47 alias_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.403 00:21:47 alias_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:21.403 00:21:47 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.403 00:21:47 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:06:21.663 [2024-05-15 00:21:47.633890] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:06:21.663 [2024-05-15 00:21:47.634013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1798947 ] 00:06:21.663 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.663 [2024-05-15 00:21:47.750522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.923 [2024-05-15 00:21:47.849493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.183 00:21:48 alias_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:22.183 00:21:48 alias_rpc -- common/autotest_common.sh@861 -- # return 0 00:06:22.183 00:21:48 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:22.443 00:21:48 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1798947 00:06:22.443 00:21:48 alias_rpc -- common/autotest_common.sh@947 -- # '[' -z 1798947 ']' 00:06:22.443 00:21:48 alias_rpc -- common/autotest_common.sh@951 -- # kill -0 1798947 00:06:22.443 00:21:48 alias_rpc -- common/autotest_common.sh@952 -- # uname 00:06:22.443 00:21:48 alias_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:22.443 00:21:48 alias_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1798947 00:06:22.443 00:21:48 alias_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:22.443 00:21:48 alias_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:22.443 00:21:48 alias_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1798947' 00:06:22.443 killing process with pid 1798947 00:06:22.443 00:21:48 alias_rpc -- common/autotest_common.sh@966 -- # kill 1798947 00:06:22.443 00:21:48 alias_rpc -- common/autotest_common.sh@971 -- # wait 1798947 00:06:23.424 00:06:23.424 real 0m1.951s 00:06:23.424 user 0m1.915s 00:06:23.424 sys 0m0.482s 00:06:23.424 00:21:49 alias_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:23.424 00:21:49 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.424 ************************************ 00:06:23.424 END TEST alias_rpc 00:06:23.424 ************************************ 00:06:23.424 00:21:49 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:06:23.424 00:21:49 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:23.424 00:21:49 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:23.424 00:21:49 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:23.424 00:21:49 -- common/autotest_common.sh@10 -- # set +x 00:06:23.424 ************************************ 00:06:23.424 START TEST spdkcli_tcp 00:06:23.424 ************************************ 00:06:23.424 00:21:49 spdkcli_tcp -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:23.424 * Looking for test storage... 
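The killprocess teardown traced above only signals the target after confirming the PID is still alive and is not an unrelated process. A minimal sketch of that sequence, assuming $pid holds a target started by the current shell:

    if kill -0 "$pid" 2>/dev/null; then                 # still running?
        comm=$(ps --no-headers -o comm= "$pid")         # e.g. reactor_0 for an SPDK app
        if [ "$comm" != sudo ]; then                    # never signal a wrapping sudo
            kill "$pid"                                 # default SIGTERM; the target exits cleanly
            wait "$pid" 2>/dev/null || true
        fi
    fi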
00:06:23.424 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli 00:06:23.424 00:21:49 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/common.sh 00:06:23.424 00:21:49 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:23.424 00:21:49 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py 00:06:23.424 00:21:49 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:23.424 00:21:49 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:23.424 00:21:49 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:23.424 00:21:49 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:23.424 00:21:49 spdkcli_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:06:23.424 00:21:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:23.424 00:21:49 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1799434 00:06:23.424 00:21:49 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1799434 00:06:23.424 00:21:49 spdkcli_tcp -- common/autotest_common.sh@828 -- # '[' -z 1799434 ']' 00:06:23.424 00:21:49 spdkcli_tcp -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.424 00:21:49 spdkcli_tcp -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:23.424 00:21:49 spdkcli_tcp -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.424 00:21:49 spdkcli_tcp -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:23.424 00:21:49 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:23.424 00:21:49 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:23.710 [2024-05-15 00:21:49.657055] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
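The -m option seen here (and throughout this run) is a hexadecimal CPU core mask: bit i selects core i, so -m 0x3 yields the two reactors reported just below and -m 0xF yields four. A quick bash one-liner to expand a mask:

    mask=0x3   # substitute any mask from the trace, e.g. 0xF
    for i in $(seq 0 63); do (( (mask >> i) & 1 )) && echo "core $i"; done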
00:06:23.710 [2024-05-15 00:21:49.657182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1799434 ] 00:06:23.710 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.710 [2024-05-15 00:21:49.788294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.970 [2024-05-15 00:21:49.888237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.970 [2024-05-15 00:21:49.888251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.231 00:21:50 spdkcli_tcp -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:24.231 00:21:50 spdkcli_tcp -- common/autotest_common.sh@861 -- # return 0 00:06:24.231 00:21:50 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1799468 00:06:24.231 00:21:50 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:24.231 00:21:50 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:24.491 [ 00:06:24.491 "bdev_malloc_delete", 00:06:24.491 "bdev_malloc_create", 00:06:24.491 "bdev_null_resize", 00:06:24.491 "bdev_null_delete", 00:06:24.491 "bdev_null_create", 00:06:24.491 "bdev_nvme_cuse_unregister", 00:06:24.491 "bdev_nvme_cuse_register", 00:06:24.491 "bdev_opal_new_user", 00:06:24.491 "bdev_opal_set_lock_state", 00:06:24.491 "bdev_opal_delete", 00:06:24.491 "bdev_opal_get_info", 00:06:24.491 "bdev_opal_create", 00:06:24.491 "bdev_nvme_opal_revert", 00:06:24.491 "bdev_nvme_opal_init", 00:06:24.491 "bdev_nvme_send_cmd", 00:06:24.491 "bdev_nvme_get_path_iostat", 00:06:24.491 "bdev_nvme_get_mdns_discovery_info", 00:06:24.491 "bdev_nvme_stop_mdns_discovery", 00:06:24.491 "bdev_nvme_start_mdns_discovery", 00:06:24.491 "bdev_nvme_set_multipath_policy", 00:06:24.491 "bdev_nvme_set_preferred_path", 00:06:24.491 "bdev_nvme_get_io_paths", 00:06:24.491 "bdev_nvme_remove_error_injection", 00:06:24.491 "bdev_nvme_add_error_injection", 00:06:24.491 "bdev_nvme_get_discovery_info", 00:06:24.491 "bdev_nvme_stop_discovery", 00:06:24.491 "bdev_nvme_start_discovery", 00:06:24.491 "bdev_nvme_get_controller_health_info", 00:06:24.491 "bdev_nvme_disable_controller", 00:06:24.491 "bdev_nvme_enable_controller", 00:06:24.491 "bdev_nvme_reset_controller", 00:06:24.491 "bdev_nvme_get_transport_statistics", 00:06:24.491 "bdev_nvme_apply_firmware", 00:06:24.491 "bdev_nvme_detach_controller", 00:06:24.491 "bdev_nvme_get_controllers", 00:06:24.491 "bdev_nvme_attach_controller", 00:06:24.491 "bdev_nvme_set_hotplug", 00:06:24.491 "bdev_nvme_set_options", 00:06:24.491 "bdev_passthru_delete", 00:06:24.491 "bdev_passthru_create", 00:06:24.491 "bdev_lvol_check_shallow_copy", 00:06:24.491 "bdev_lvol_start_shallow_copy", 00:06:24.491 "bdev_lvol_grow_lvstore", 00:06:24.491 "bdev_lvol_get_lvols", 00:06:24.491 "bdev_lvol_get_lvstores", 00:06:24.491 "bdev_lvol_delete", 00:06:24.491 "bdev_lvol_set_read_only", 00:06:24.491 "bdev_lvol_resize", 00:06:24.491 "bdev_lvol_decouple_parent", 00:06:24.491 "bdev_lvol_inflate", 00:06:24.491 "bdev_lvol_rename", 00:06:24.491 "bdev_lvol_clone_bdev", 00:06:24.491 "bdev_lvol_clone", 00:06:24.491 "bdev_lvol_snapshot", 00:06:24.491 "bdev_lvol_create", 00:06:24.491 "bdev_lvol_delete_lvstore", 00:06:24.491 "bdev_lvol_rename_lvstore", 00:06:24.491 "bdev_lvol_create_lvstore", 00:06:24.491 "bdev_raid_set_options", 
00:06:24.491 "bdev_raid_remove_base_bdev", 00:06:24.491 "bdev_raid_add_base_bdev", 00:06:24.491 "bdev_raid_delete", 00:06:24.491 "bdev_raid_create", 00:06:24.491 "bdev_raid_get_bdevs", 00:06:24.491 "bdev_error_inject_error", 00:06:24.491 "bdev_error_delete", 00:06:24.491 "bdev_error_create", 00:06:24.491 "bdev_split_delete", 00:06:24.491 "bdev_split_create", 00:06:24.491 "bdev_delay_delete", 00:06:24.491 "bdev_delay_create", 00:06:24.491 "bdev_delay_update_latency", 00:06:24.491 "bdev_zone_block_delete", 00:06:24.491 "bdev_zone_block_create", 00:06:24.491 "blobfs_create", 00:06:24.491 "blobfs_detect", 00:06:24.491 "blobfs_set_cache_size", 00:06:24.491 "bdev_aio_delete", 00:06:24.491 "bdev_aio_rescan", 00:06:24.491 "bdev_aio_create", 00:06:24.491 "bdev_ftl_set_property", 00:06:24.491 "bdev_ftl_get_properties", 00:06:24.491 "bdev_ftl_get_stats", 00:06:24.491 "bdev_ftl_unmap", 00:06:24.491 "bdev_ftl_unload", 00:06:24.491 "bdev_ftl_delete", 00:06:24.491 "bdev_ftl_load", 00:06:24.491 "bdev_ftl_create", 00:06:24.491 "bdev_virtio_attach_controller", 00:06:24.491 "bdev_virtio_scsi_get_devices", 00:06:24.491 "bdev_virtio_detach_controller", 00:06:24.491 "bdev_virtio_blk_set_hotplug", 00:06:24.491 "bdev_iscsi_delete", 00:06:24.491 "bdev_iscsi_create", 00:06:24.491 "bdev_iscsi_set_options", 00:06:24.491 "accel_error_inject_error", 00:06:24.491 "ioat_scan_accel_module", 00:06:24.491 "dsa_scan_accel_module", 00:06:24.491 "iaa_scan_accel_module", 00:06:24.491 "keyring_file_remove_key", 00:06:24.491 "keyring_file_add_key", 00:06:24.491 "iscsi_get_histogram", 00:06:24.491 "iscsi_enable_histogram", 00:06:24.491 "iscsi_set_options", 00:06:24.491 "iscsi_get_auth_groups", 00:06:24.491 "iscsi_auth_group_remove_secret", 00:06:24.491 "iscsi_auth_group_add_secret", 00:06:24.491 "iscsi_delete_auth_group", 00:06:24.491 "iscsi_create_auth_group", 00:06:24.491 "iscsi_set_discovery_auth", 00:06:24.491 "iscsi_get_options", 00:06:24.491 "iscsi_target_node_request_logout", 00:06:24.491 "iscsi_target_node_set_redirect", 00:06:24.491 "iscsi_target_node_set_auth", 00:06:24.491 "iscsi_target_node_add_lun", 00:06:24.491 "iscsi_get_stats", 00:06:24.491 "iscsi_get_connections", 00:06:24.491 "iscsi_portal_group_set_auth", 00:06:24.491 "iscsi_start_portal_group", 00:06:24.491 "iscsi_delete_portal_group", 00:06:24.491 "iscsi_create_portal_group", 00:06:24.491 "iscsi_get_portal_groups", 00:06:24.491 "iscsi_delete_target_node", 00:06:24.491 "iscsi_target_node_remove_pg_ig_maps", 00:06:24.491 "iscsi_target_node_add_pg_ig_maps", 00:06:24.491 "iscsi_create_target_node", 00:06:24.491 "iscsi_get_target_nodes", 00:06:24.491 "iscsi_delete_initiator_group", 00:06:24.491 "iscsi_initiator_group_remove_initiators", 00:06:24.491 "iscsi_initiator_group_add_initiators", 00:06:24.491 "iscsi_create_initiator_group", 00:06:24.491 "iscsi_get_initiator_groups", 00:06:24.491 "nvmf_set_crdt", 00:06:24.491 "nvmf_set_config", 00:06:24.491 "nvmf_set_max_subsystems", 00:06:24.491 "nvmf_stop_mdns_prr", 00:06:24.491 "nvmf_publish_mdns_prr", 00:06:24.491 "nvmf_subsystem_get_listeners", 00:06:24.491 "nvmf_subsystem_get_qpairs", 00:06:24.491 "nvmf_subsystem_get_controllers", 00:06:24.491 "nvmf_get_stats", 00:06:24.491 "nvmf_get_transports", 00:06:24.491 "nvmf_create_transport", 00:06:24.491 "nvmf_get_targets", 00:06:24.491 "nvmf_delete_target", 00:06:24.491 "nvmf_create_target", 00:06:24.491 "nvmf_subsystem_allow_any_host", 00:06:24.491 "nvmf_subsystem_remove_host", 00:06:24.491 "nvmf_subsystem_add_host", 00:06:24.491 "nvmf_ns_remove_host", 00:06:24.491 
"nvmf_ns_add_host", 00:06:24.491 "nvmf_subsystem_remove_ns", 00:06:24.491 "nvmf_subsystem_add_ns", 00:06:24.491 "nvmf_subsystem_listener_set_ana_state", 00:06:24.491 "nvmf_discovery_get_referrals", 00:06:24.491 "nvmf_discovery_remove_referral", 00:06:24.491 "nvmf_discovery_add_referral", 00:06:24.491 "nvmf_subsystem_remove_listener", 00:06:24.491 "nvmf_subsystem_add_listener", 00:06:24.491 "nvmf_delete_subsystem", 00:06:24.491 "nvmf_create_subsystem", 00:06:24.491 "nvmf_get_subsystems", 00:06:24.491 "env_dpdk_get_mem_stats", 00:06:24.491 "nbd_get_disks", 00:06:24.491 "nbd_stop_disk", 00:06:24.491 "nbd_start_disk", 00:06:24.491 "ublk_recover_disk", 00:06:24.491 "ublk_get_disks", 00:06:24.491 "ublk_stop_disk", 00:06:24.491 "ublk_start_disk", 00:06:24.491 "ublk_destroy_target", 00:06:24.491 "ublk_create_target", 00:06:24.491 "virtio_blk_create_transport", 00:06:24.491 "virtio_blk_get_transports", 00:06:24.491 "vhost_controller_set_coalescing", 00:06:24.491 "vhost_get_controllers", 00:06:24.491 "vhost_delete_controller", 00:06:24.491 "vhost_create_blk_controller", 00:06:24.491 "vhost_scsi_controller_remove_target", 00:06:24.491 "vhost_scsi_controller_add_target", 00:06:24.491 "vhost_start_scsi_controller", 00:06:24.491 "vhost_create_scsi_controller", 00:06:24.491 "thread_set_cpumask", 00:06:24.491 "framework_get_scheduler", 00:06:24.491 "framework_set_scheduler", 00:06:24.491 "framework_get_reactors", 00:06:24.491 "thread_get_io_channels", 00:06:24.491 "thread_get_pollers", 00:06:24.491 "thread_get_stats", 00:06:24.491 "framework_monitor_context_switch", 00:06:24.491 "spdk_kill_instance", 00:06:24.491 "log_enable_timestamps", 00:06:24.491 "log_get_flags", 00:06:24.491 "log_clear_flag", 00:06:24.491 "log_set_flag", 00:06:24.491 "log_get_level", 00:06:24.491 "log_set_level", 00:06:24.491 "log_get_print_level", 00:06:24.491 "log_set_print_level", 00:06:24.491 "framework_enable_cpumask_locks", 00:06:24.491 "framework_disable_cpumask_locks", 00:06:24.491 "framework_wait_init", 00:06:24.491 "framework_start_init", 00:06:24.491 "scsi_get_devices", 00:06:24.491 "bdev_get_histogram", 00:06:24.491 "bdev_enable_histogram", 00:06:24.491 "bdev_set_qos_limit", 00:06:24.491 "bdev_set_qd_sampling_period", 00:06:24.491 "bdev_get_bdevs", 00:06:24.491 "bdev_reset_iostat", 00:06:24.491 "bdev_get_iostat", 00:06:24.491 "bdev_examine", 00:06:24.491 "bdev_wait_for_examine", 00:06:24.491 "bdev_set_options", 00:06:24.491 "notify_get_notifications", 00:06:24.491 "notify_get_types", 00:06:24.491 "accel_get_stats", 00:06:24.491 "accel_set_options", 00:06:24.491 "accel_set_driver", 00:06:24.491 "accel_crypto_key_destroy", 00:06:24.491 "accel_crypto_keys_get", 00:06:24.491 "accel_crypto_key_create", 00:06:24.491 "accel_assign_opc", 00:06:24.491 "accel_get_module_info", 00:06:24.491 "accel_get_opc_assignments", 00:06:24.491 "vmd_rescan", 00:06:24.491 "vmd_remove_device", 00:06:24.491 "vmd_enable", 00:06:24.491 "sock_get_default_impl", 00:06:24.491 "sock_set_default_impl", 00:06:24.491 "sock_impl_set_options", 00:06:24.491 "sock_impl_get_options", 00:06:24.491 "iobuf_get_stats", 00:06:24.492 "iobuf_set_options", 00:06:24.492 "framework_get_pci_devices", 00:06:24.492 "framework_get_config", 00:06:24.492 "framework_get_subsystems", 00:06:24.492 "trace_get_info", 00:06:24.492 "trace_get_tpoint_group_mask", 00:06:24.492 "trace_disable_tpoint_group", 00:06:24.492 "trace_enable_tpoint_group", 00:06:24.492 "trace_clear_tpoint_mask", 00:06:24.492 "trace_set_tpoint_mask", 00:06:24.492 "keyring_get_keys", 00:06:24.492 
"spdk_get_version", 00:06:24.492 "rpc_get_methods" 00:06:24.492 ] 00:06:24.492 00:21:50 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:24.492 00:21:50 spdkcli_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:06:24.492 00:21:50 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:24.492 00:21:50 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:24.492 00:21:50 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1799434 00:06:24.492 00:21:50 spdkcli_tcp -- common/autotest_common.sh@947 -- # '[' -z 1799434 ']' 00:06:24.492 00:21:50 spdkcli_tcp -- common/autotest_common.sh@951 -- # kill -0 1799434 00:06:24.492 00:21:50 spdkcli_tcp -- common/autotest_common.sh@952 -- # uname 00:06:24.492 00:21:50 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:24.492 00:21:50 spdkcli_tcp -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1799434 00:06:24.492 00:21:50 spdkcli_tcp -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:24.492 00:21:50 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:24.492 00:21:50 spdkcli_tcp -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1799434' 00:06:24.492 killing process with pid 1799434 00:06:24.492 00:21:50 spdkcli_tcp -- common/autotest_common.sh@966 -- # kill 1799434 00:06:24.492 00:21:50 spdkcli_tcp -- common/autotest_common.sh@971 -- # wait 1799434 00:06:25.433 00:06:25.433 real 0m1.971s 00:06:25.433 user 0m3.335s 00:06:25.433 sys 0m0.525s 00:06:25.433 00:21:51 spdkcli_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:25.433 00:21:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:25.433 ************************************ 00:06:25.433 END TEST spdkcli_tcp 00:06:25.433 ************************************ 00:06:25.433 00:21:51 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:25.433 00:21:51 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:25.433 00:21:51 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:25.433 00:21:51 -- common/autotest_common.sh@10 -- # set +x 00:06:25.433 ************************************ 00:06:25.433 START TEST dpdk_mem_utility 00:06:25.433 ************************************ 00:06:25.433 00:21:51 dpdk_mem_utility -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:25.433 * Looking for test storage... 
00:06:25.433 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility 00:06:25.433 00:21:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:25.433 00:21:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1799818 00:06:25.433 00:21:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1799818 00:06:25.433 00:21:51 dpdk_mem_utility -- common/autotest_common.sh@828 -- # '[' -z 1799818 ']' 00:06:25.433 00:21:51 dpdk_mem_utility -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.433 00:21:51 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:25.433 00:21:51 dpdk_mem_utility -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.433 00:21:51 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:06:25.433 00:21:51 dpdk_mem_utility -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:25.433 00:21:51 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:25.694 [2024-05-15 00:21:51.699564] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:06:25.694 [2024-05-15 00:21:51.699702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1799818 ] 00:06:25.694 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.694 [2024-05-15 00:21:51.833360] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.954 [2024-05-15 00:21:51.933544] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.525 00:21:52 dpdk_mem_utility -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:26.525 00:21:52 dpdk_mem_utility -- common/autotest_common.sh@861 -- # return 0 00:06:26.525 00:21:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:26.525 00:21:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:26.525 00:21:52 dpdk_mem_utility -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:26.525 00:21:52 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:26.525 { 00:06:26.525 "filename": "/tmp/spdk_mem_dump.txt" 00:06:26.525 } 00:06:26.525 00:21:52 dpdk_mem_utility -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:26.525 00:21:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:26.525 DPDK memory size 820.000000 MiB in 1 heap(s) 00:06:26.525 1 heaps totaling size 820.000000 MiB 00:06:26.525 size: 820.000000 MiB heap id: 0 00:06:26.525 end heaps---------- 00:06:26.525 8 mempools totaling size 598.116089 MiB 00:06:26.525 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:26.525 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:26.525 size: 84.521057 MiB name: bdev_io_1799818 00:06:26.525 size: 51.011292 MiB name: evtpool_1799818 00:06:26.525 size: 50.003479 MiB name: msgpool_1799818 
00:06:26.525 size: 21.763794 MiB name: PDU_Pool 00:06:26.525 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:26.525 size: 0.026123 MiB name: Session_Pool 00:06:26.525 end mempools------- 00:06:26.525 6 memzones totaling size 4.142822 MiB 00:06:26.525 size: 1.000366 MiB name: RG_ring_0_1799818 00:06:26.525 size: 1.000366 MiB name: RG_ring_1_1799818 00:06:26.525 size: 1.000366 MiB name: RG_ring_4_1799818 00:06:26.525 size: 1.000366 MiB name: RG_ring_5_1799818 00:06:26.525 size: 0.125366 MiB name: RG_ring_2_1799818 00:06:26.525 size: 0.015991 MiB name: RG_ring_3_1799818 00:06:26.525 end memzones------- 00:06:26.525 00:21:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:26.525 heap id: 0 total size: 820.000000 MiB number of busy elements: 41 number of free elements: 19 00:06:26.525 list of free elements. size: 18.514832 MiB 00:06:26.525 element at address: 0x200000400000 with size: 1.999451 MiB 00:06:26.525 element at address: 0x200000800000 with size: 1.996887 MiB 00:06:26.525 element at address: 0x200007000000 with size: 1.995972 MiB 00:06:26.525 element at address: 0x20000b200000 with size: 1.995972 MiB 00:06:26.525 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:26.525 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:26.525 element at address: 0x200019600000 with size: 0.999329 MiB 00:06:26.525 element at address: 0x200003e00000 with size: 0.996094 MiB 00:06:26.525 element at address: 0x200032200000 with size: 0.994324 MiB 00:06:26.525 element at address: 0x200018e00000 with size: 0.959900 MiB 00:06:26.525 element at address: 0x200019900040 with size: 0.937256 MiB 00:06:26.525 element at address: 0x200000200000 with size: 0.840942 MiB 00:06:26.525 element at address: 0x20001b000000 with size: 0.583191 MiB 00:06:26.525 element at address: 0x200019200000 with size: 0.491150 MiB 00:06:26.525 element at address: 0x200019a00000 with size: 0.485657 MiB 00:06:26.525 element at address: 0x200013800000 with size: 0.470581 MiB 00:06:26.525 element at address: 0x200028400000 with size: 0.411072 MiB 00:06:26.525 element at address: 0x200003a00000 with size: 0.356140 MiB 00:06:26.525 element at address: 0x20000b1ff040 with size: 0.001038 MiB 00:06:26.525 list of standard malloc elements. 
size: 199.220764 MiB 00:06:26.525 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:06:26.525 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:06:26.525 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:26.525 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:26.525 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:26.525 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:26.525 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:06:26.525 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:26.525 element at address: 0x2000137ff040 with size: 0.000427 MiB 00:06:26.525 element at address: 0x2000137ffa00 with size: 0.000366 MiB 00:06:26.525 element at address: 0x2000002d7480 with size: 0.000244 MiB 00:06:26.525 element at address: 0x2000002d7580 with size: 0.000244 MiB 00:06:26.525 element at address: 0x2000002d7680 with size: 0.000244 MiB 00:06:26.525 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:06:26.525 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:06:26.525 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:26.525 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:26.525 element at address: 0x200003aff980 with size: 0.000244 MiB 00:06:26.525 element at address: 0x200003affa80 with size: 0.000244 MiB 00:06:26.525 element at address: 0x200003eff000 with size: 0.000244 MiB 00:06:26.525 element at address: 0x20000b1ff480 with size: 0.000244 MiB 00:06:26.525 element at address: 0x20000b1ff580 with size: 0.000244 MiB 00:06:26.525 element at address: 0x20000b1ff680 with size: 0.000244 MiB 00:06:26.525 element at address: 0x20000b1ff780 with size: 0.000244 MiB 00:06:26.525 element at address: 0x20000b1ff880 with size: 0.000244 MiB 00:06:26.525 element at address: 0x20000b1ff980 with size: 0.000244 MiB 00:06:26.525 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:06:26.525 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:06:26.525 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 00:06:26.525 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:06:26.525 element at address: 0x2000137ff200 with size: 0.000244 MiB 00:06:26.525 element at address: 0x2000137ff300 with size: 0.000244 MiB 00:06:26.525 element at address: 0x2000137ff400 with size: 0.000244 MiB 00:06:26.525 element at address: 0x2000137ff500 with size: 0.000244 MiB 00:06:26.525 element at address: 0x2000137ff600 with size: 0.000244 MiB 00:06:26.525 element at address: 0x2000137ff700 with size: 0.000244 MiB 00:06:26.525 element at address: 0x2000137ff800 with size: 0.000244 MiB 00:06:26.525 element at address: 0x2000137ff900 with size: 0.000244 MiB 00:06:26.525 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:06:26.525 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:06:26.525 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:06:26.525 list of memzone associated elements. 
size: 602.264404 MiB 00:06:26.525 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:06:26.525 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:26.525 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:06:26.525 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:26.526 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:06:26.526 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1799818_0 00:06:26.526 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:06:26.526 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1799818_0 00:06:26.526 element at address: 0x200003fff340 with size: 48.003113 MiB 00:06:26.526 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1799818_0 00:06:26.526 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:06:26.526 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:26.526 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:06:26.526 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:26.526 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:06:26.526 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1799818 00:06:26.526 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:06:26.526 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1799818 00:06:26.526 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:26.526 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1799818 00:06:26.526 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:26.526 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:26.526 element at address: 0x200019abc780 with size: 1.008179 MiB 00:06:26.526 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:26.526 element at address: 0x200018efde00 with size: 1.008179 MiB 00:06:26.526 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:26.526 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:06:26.526 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:26.526 element at address: 0x200003eff100 with size: 1.000549 MiB 00:06:26.526 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1799818 00:06:26.526 element at address: 0x200003affb80 with size: 1.000549 MiB 00:06:26.526 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1799818 00:06:26.526 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:06:26.526 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1799818 00:06:26.526 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:06:26.526 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1799818 00:06:26.526 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:06:26.526 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1799818 00:06:26.526 element at address: 0x20001927dbc0 with size: 0.500549 MiB 00:06:26.526 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:26.526 element at address: 0x200013878780 with size: 0.500549 MiB 00:06:26.526 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:26.526 element at address: 0x200019a7c540 with size: 0.250549 MiB 00:06:26.526 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:26.526 element at address: 0x200003adf740 with size: 0.125549 MiB 00:06:26.526 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1799818 00:06:26.526 element at address: 0x200018ef5bc0 with size: 0.031799 MiB 00:06:26.526 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:26.526 element at address: 0x2000284693c0 with size: 0.023804 MiB 00:06:26.526 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:26.526 element at address: 0x200003adb500 with size: 0.016174 MiB 00:06:26.526 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1799818 00:06:26.526 element at address: 0x20002846f540 with size: 0.002502 MiB 00:06:26.526 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:26.526 element at address: 0x2000002d7780 with size: 0.000366 MiB 00:06:26.526 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1799818 00:06:26.526 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:06:26.526 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1799818 00:06:26.526 element at address: 0x20000b1ffa80 with size: 0.000366 MiB 00:06:26.526 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:26.526 00:21:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:26.526 00:21:52 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1799818 00:06:26.526 00:21:52 dpdk_mem_utility -- common/autotest_common.sh@947 -- # '[' -z 1799818 ']' 00:06:26.526 00:21:52 dpdk_mem_utility -- common/autotest_common.sh@951 -- # kill -0 1799818 00:06:26.526 00:21:52 dpdk_mem_utility -- common/autotest_common.sh@952 -- # uname 00:06:26.526 00:21:52 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:26.526 00:21:52 dpdk_mem_utility -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1799818 00:06:26.526 00:21:52 dpdk_mem_utility -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:26.526 00:21:52 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:26.526 00:21:52 dpdk_mem_utility -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1799818' 00:06:26.526 killing process with pid 1799818 00:06:26.526 00:21:52 dpdk_mem_utility -- common/autotest_common.sh@966 -- # kill 1799818 00:06:26.526 00:21:52 dpdk_mem_utility -- common/autotest_common.sh@971 -- # wait 1799818 00:06:27.467 00:06:27.467 real 0m1.863s 00:06:27.467 user 0m1.755s 00:06:27.467 sys 0m0.484s 00:06:27.467 00:21:53 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:27.468 00:21:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:27.468 ************************************ 00:06:27.468 END TEST dpdk_mem_utility 00:06:27.468 ************************************ 00:06:27.468 00:21:53 -- spdk/autotest.sh@177 -- # run_test event /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event.sh 00:06:27.468 00:21:53 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:27.468 00:21:53 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:27.468 00:21:53 -- common/autotest_common.sh@10 -- # set +x 00:06:27.468 ************************************ 00:06:27.468 START TEST event 00:06:27.468 ************************************ 00:06:27.468 00:21:53 event -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event.sh 00:06:27.468 * Looking for test storage... 
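The dpdk_mem_utility test above is a two-step flow: the env_dpdk_get_mem_stats RPC makes the running target dump its DPDK memory state to /tmp/spdk_mem_dump.txt, and scripts/dpdk_mem_info.py renders that dump as the heap/mempool/memzone summary shown. A minimal sketch, assuming an SPDK checkout in $SPDK_DIR and a target already listening on the default socket:

    "$SPDK_DIR"/scripts/rpc.py env_dpdk_get_mem_stats    # returns {"filename": "/tmp/spdk_mem_dump.txt"}
    "$SPDK_DIR"/scripts/dpdk_mem_info.py                 # overall heap/mempool/memzone totals
    "$SPDK_DIR"/scripts/dpdk_mem_info.py -m 0            # the per-element breakdown shown above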
00:06:27.468 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event 00:06:27.468 00:21:53 event -- event/event.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:27.468 00:21:53 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:27.468 00:21:53 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:27.468 00:21:53 event -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:06:27.468 00:21:53 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:27.468 00:21:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.468 ************************************ 00:06:27.468 START TEST event_perf 00:06:27.468 ************************************ 00:06:27.468 00:21:53 event.event_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:27.468 Running I/O for 1 seconds...[2024-05-15 00:21:53.606877] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:06:27.468 [2024-05-15 00:21:53.607009] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1800200 ] 00:06:27.732 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.732 [2024-05-15 00:21:53.741266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:27.732 [2024-05-15 00:21:53.839431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.732 [2024-05-15 00:21:53.839541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:27.732 [2024-05-15 00:21:53.839597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:27.732 [2024-05-15 00:21:53.839570] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.117 Running I/O for 1 seconds... 00:06:29.117 lcore 0: 149118 00:06:29.117 lcore 1: 149120 00:06:29.117 lcore 2: 149116 00:06:29.117 lcore 3: 149117 00:06:29.117 done. 00:06:29.117 00:06:29.117 real 0m1.428s 00:06:29.117 user 0m4.267s 00:06:29.117 sys 0m0.147s 00:06:29.117 00:21:54 event.event_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:29.117 00:21:54 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:29.117 ************************************ 00:06:29.117 END TEST event_perf 00:06:29.117 ************************************ 00:06:29.117 00:21:55 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:29.117 00:21:55 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:06:29.117 00:21:55 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:29.117 00:21:55 event -- common/autotest_common.sh@10 -- # set +x 00:06:29.117 ************************************ 00:06:29.117 START TEST event_reactor 00:06:29.117 ************************************ 00:06:29.117 00:21:55 event.event_reactor -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:29.117 [2024-05-15 00:21:55.099319] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:06:29.117 [2024-05-15 00:21:55.099422] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1800514 ] 00:06:29.117 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.117 [2024-05-15 00:21:55.217607] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.377 [2024-05-15 00:21:55.326249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.317 test_start 00:06:30.317 oneshot 00:06:30.317 tick 100 00:06:30.317 tick 100 00:06:30.317 tick 250 00:06:30.317 tick 100 00:06:30.317 tick 100 00:06:30.317 tick 100 00:06:30.317 tick 250 00:06:30.317 tick 500 00:06:30.317 tick 100 00:06:30.317 tick 100 00:06:30.317 tick 250 00:06:30.317 tick 100 00:06:30.317 tick 100 00:06:30.317 test_end 00:06:30.317 00:06:30.317 real 0m1.408s 00:06:30.317 user 0m1.268s 00:06:30.317 sys 0m0.133s 00:06:30.317 00:21:56 event.event_reactor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:30.317 00:21:56 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:30.317 ************************************ 00:06:30.317 END TEST event_reactor 00:06:30.317 ************************************ 00:06:30.577 00:21:56 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:30.577 00:21:56 event -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:06:30.577 00:21:56 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:30.577 00:21:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:30.577 ************************************ 00:06:30.577 START TEST event_reactor_perf 00:06:30.577 ************************************ 00:06:30.577 00:21:56 event.event_reactor_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:30.577 [2024-05-15 00:21:56.579184] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
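The three event-framework microbenchmarks in this part of the run can be reproduced standalone with the same flags used in the trace: -m is the reactor core mask and -t the run time in seconds. Paths assume an SPDK build tree in $SPDK_DIR:

    "$SPDK_DIR"/test/event/event_perf/event_perf -m 0xF -t 1   # prints per-lcore event counts
    "$SPDK_DIR"/test/event/reactor/reactor -t 1                # oneshot/tick ordering trace
    "$SPDK_DIR"/test/event/reactor_perf/reactor_perf -t 1      # prints events per second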
00:06:30.577 [2024-05-15 00:21:56.579317] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1800829 ] 00:06:30.577 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.577 [2024-05-15 00:21:56.710304] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.838 [2024-05-15 00:21:56.806824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.223 test_start 00:06:32.223 test_end 00:06:32.223 Performance: 425431 events per second 00:06:32.223 00:06:32.223 real 0m1.420s 00:06:32.223 user 0m1.271s 00:06:32.223 sys 0m0.142s 00:06:32.223 00:21:57 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:32.223 00:21:57 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:32.223 ************************************ 00:06:32.223 END TEST event_reactor_perf 00:06:32.223 ************************************ 00:06:32.223 00:21:57 event -- event/event.sh@49 -- # uname -s 00:06:32.223 00:21:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:32.223 00:21:58 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:32.223 00:21:58 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:32.223 00:21:58 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:32.223 00:21:58 event -- common/autotest_common.sh@10 -- # set +x 00:06:32.223 ************************************ 00:06:32.223 START TEST event_scheduler 00:06:32.223 ************************************ 00:06:32.223 00:21:58 event.event_scheduler -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:32.223 * Looking for test storage... 00:06:32.223 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler 00:06:32.223 00:21:58 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:32.223 00:21:58 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1801173 00:06:32.223 00:21:58 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:32.223 00:21:58 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:32.223 00:21:58 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1801173 00:06:32.223 00:21:58 event.event_scheduler -- common/autotest_common.sh@828 -- # '[' -z 1801173 ']' 00:06:32.223 00:21:58 event.event_scheduler -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.223 00:21:58 event.event_scheduler -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:32.223 00:21:58 event.event_scheduler -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
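The scheduler test launched above uses --wait-for-rpc: the app comes up with the framework paused, is configured over RPC, and only then completes init, which is what the trace below does before creating its test threads. A minimal sketch of that bring-up, assuming an SPDK checkout in $SPDK_DIR and the scheduler_plugin module on PYTHONPATH:

    "$SPDK_DIR"/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    rpc="$SPDK_DIR/scripts/rpc.py"
    sleep 1                                         # stand-in for waitforlisten
    "$rpc" framework_set_scheduler dynamic          # must happen before framework init
    "$rpc" framework_start_init                     # reactors start; the scheduling test can begin
    # the test then drives plugin-provided RPCs, e.g.:
    # "$rpc" --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100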
00:06:32.223 00:21:58 event.event_scheduler -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:32.223 00:21:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:32.223 [2024-05-15 00:21:58.188799] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:06:32.223 [2024-05-15 00:21:58.188920] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1801173 ] 00:06:32.223 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.223 [2024-05-15 00:21:58.312404] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:32.484 [2024-05-15 00:21:58.420054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.484 [2024-05-15 00:21:58.420206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.484 [2024-05-15 00:21:58.420282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.484 [2024-05-15 00:21:58.420294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:32.744 00:21:58 event.event_scheduler -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:32.744 00:21:58 event.event_scheduler -- common/autotest_common.sh@861 -- # return 0 00:06:32.744 00:21:58 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:32.744 00:21:58 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:32.744 00:21:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:33.004 POWER: Env isn't set yet! 00:06:33.004 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:33.004 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:33.004 POWER: Cannot set governor of lcore 0 to userspace 00:06:33.004 POWER: Attempting to initialise PSTAT power management... 
00:06:33.004 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:33.004 POWER: Initialized successfully for lcore 0 power management 00:06:33.004 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:33.004 POWER: Initialized successfully for lcore 1 power management 00:06:33.004 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:33.004 POWER: Initialized successfully for lcore 2 power management 00:06:33.004 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:33.004 POWER: Initialized successfully for lcore 3 power management 00:06:33.004 [2024-05-15 00:21:58.953795] scheduler_dynamic.c: 382:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:33.004 [2024-05-15 00:21:58.953821] scheduler_dynamic.c: 384:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:33.004 [2024-05-15 00:21:58.953838] scheduler_dynamic.c: 386:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:33.004 00:21:58 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:33.004 00:21:58 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:33.004 00:21:58 event.event_scheduler -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:33.004 00:21:58 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:33.004 [2024-05-15 00:21:59.106422] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:33.004 00:21:59 event.event_scheduler -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:33.004 00:21:59 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:33.004 00:21:59 event.event_scheduler -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:33.004 00:21:59 event.event_scheduler -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:33.004 00:21:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:33.004 ************************************ 00:06:33.004 START TEST scheduler_create_thread 00:06:33.004 ************************************ 00:06:33.004 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1122 -- # scheduler_create_thread 00:06:33.004 00:21:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:33.004 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:33.005 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.005 2 00:06:33.005 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:33.005 00:21:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:33.005 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:33.005 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.005 3 00:06:33.005 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:33.005 00:21:59 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:33.005 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:33.005 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.266 4 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.266 5 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.266 6 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.266 7 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.266 8 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.266 9 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.266 10 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:33.266 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.838 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:33.838 00:06:33.838 real 0m0.593s 00:06:33.838 user 0m0.013s 00:06:33.838 sys 0m0.004s 00:06:33.838 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:33.838 00:21:59 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.838 ************************************ 00:06:33.838 END TEST scheduler_create_thread 00:06:33.838 ************************************ 00:06:33.838 00:21:59 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:33.838 00:21:59 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1801173 00:06:33.838 00:21:59 event.event_scheduler -- common/autotest_common.sh@947 -- # '[' -z 1801173 ']' 00:06:33.838 00:21:59 event.event_scheduler -- common/autotest_common.sh@951 -- # kill -0 1801173 00:06:33.838 00:21:59 event.event_scheduler -- common/autotest_common.sh@952 -- # uname 
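Note on the scheduler_create_thread trace above: the whole test is driven through rpc_cmd against the scheduler test app, creating a pinned active and a pinned idle thread per core, one unpinned thread whose activity is then raised to 50%, and one thread that exists only to be deleted. A minimal bash sketch of that RPC sequence, reconstructed from the xtrace (the scheduler_plugin name, thread names, core masks and activity values come from the trace; capturing the returned thread IDs by command substitution and the absence of error handling are assumptions):

# Sketch only, not part of the captured log output.
for mask in 0x1 0x2 0x4 0x8; do   # one busy thread pinned to each core
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
done
for mask in 0x1 0x2 0x4 0x8; do   # one idle thread pinned to each core
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
done
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50   # raise the idle thread to 50% busy
thread_id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"          # created only to exercise deletion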
00:06:33.838 00:21:59 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:33.838 00:21:59 event.event_scheduler -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1801173 00:06:33.838 00:21:59 event.event_scheduler -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:06:33.838 00:21:59 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:06:33.838 00:21:59 event.event_scheduler -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1801173' 00:06:33.838 killing process with pid 1801173 00:06:33.838 00:21:59 event.event_scheduler -- common/autotest_common.sh@966 -- # kill 1801173 00:06:33.838 00:21:59 event.event_scheduler -- common/autotest_common.sh@971 -- # wait 1801173 00:06:34.098 [2024-05-15 00:22:00.214806] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:34.359 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:34.359 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:34.359 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:34.359 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:34.359 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:34.359 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:34.359 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:34.359 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:34.619 00:06:34.619 real 0m2.633s 00:06:34.619 user 0m4.734s 00:06:34.619 sys 0m0.449s 00:06:34.619 00:22:00 event.event_scheduler -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:34.619 00:22:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:34.619 ************************************ 00:06:34.619 END TEST event_scheduler 00:06:34.619 ************************************ 00:06:34.619 00:22:00 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:34.619 00:22:00 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:34.619 00:22:00 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:34.619 00:22:00 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:34.619 00:22:00 event -- common/autotest_common.sh@10 -- # set +x 00:06:34.619 ************************************ 00:06:34.619 START TEST app_repeat 00:06:34.619 ************************************ 00:06:34.619 00:22:00 event.app_repeat -- common/autotest_common.sh@1122 -- # app_repeat_test 00:06:34.619 00:22:00 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.619 00:22:00 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:34.619 00:22:00 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:34.619 00:22:00 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:34.619 00:22:00 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:34.619 00:22:00 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:34.619 00:22:00 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:34.619 00:22:00 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1801820 00:06:34.619 00:22:00 
event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:34.619 00:22:00 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1801820' 00:06:34.619 Process app_repeat pid: 1801820 00:06:34.619 00:22:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:34.619 00:22:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:34.619 spdk_app_start Round 0 00:06:34.619 00:22:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1801820 /var/tmp/spdk-nbd.sock 00:06:34.619 00:22:00 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 1801820 ']' 00:06:34.619 00:22:00 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:34.619 00:22:00 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:34.619 00:22:00 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:34.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:34.619 00:22:00 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:34.619 00:22:00 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:34.619 00:22:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:34.880 [2024-05-15 00:22:00.802390] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:06:34.880 [2024-05-15 00:22:00.802530] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1801820 ] 00:06:34.880 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.880 [2024-05-15 00:22:00.939895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.880 [2024-05-15 00:22:01.041761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.880 [2024-05-15 00:22:01.041781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.451 00:22:01 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:35.451 00:22:01 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:06:35.451 00:22:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.712 Malloc0 00:06:35.712 00:22:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:35.712 Malloc1 00:06:35.712 00:22:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.712 00:22:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.712 00:22:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.712 00:22:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:35.712 00:22:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.712 00:22:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:35.712 00:22:01 event.app_repeat -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:35.712 00:22:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.712 00:22:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:35.712 00:22:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:35.712 00:22:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:35.712 00:22:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:35.712 00:22:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:35.712 00:22:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:35.712 00:22:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.712 00:22:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:35.973 /dev/nbd0 00:06:35.973 00:22:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:35.973 00:22:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:35.973 00:22:01 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:06:35.973 00:22:01 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:06:35.973 00:22:01 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:06:35.973 00:22:01 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:06:35.973 00:22:01 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:06:35.973 00:22:01 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:06:35.973 00:22:01 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:06:35.973 00:22:02 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:06:35.973 00:22:02 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:35.973 1+0 records in 00:06:35.973 1+0 records out 00:06:35.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00030626 s, 13.4 MB/s 00:06:35.973 00:22:02 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:06:35.973 00:22:02 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:06:35.973 00:22:02 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:06:35.973 00:22:02 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:06:35.973 00:22:02 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:06:35.973 00:22:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.973 00:22:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:35.973 00:22:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:36.233 /dev/nbd1 00:06:36.233 00:22:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:36.234 00:22:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:36.234 00:22:02 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:06:36.234 00:22:02 event.app_repeat -- common/autotest_common.sh@866 -- # 
local i 00:06:36.234 00:22:02 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:06:36.234 00:22:02 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:06:36.234 00:22:02 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:06:36.234 00:22:02 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:06:36.234 00:22:02 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:06:36.234 00:22:02 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:06:36.234 00:22:02 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:36.234 1+0 records in 00:06:36.234 1+0 records out 00:06:36.234 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225551 s, 18.2 MB/s 00:06:36.234 00:22:02 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:06:36.234 00:22:02 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:06:36.234 00:22:02 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:06:36.234 00:22:02 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:06:36.234 00:22:02 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:06:36.234 00:22:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.234 00:22:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:36.234 00:22:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.234 00:22:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.234 00:22:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.234 00:22:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:36.234 { 00:06:36.234 "nbd_device": "/dev/nbd0", 00:06:36.234 "bdev_name": "Malloc0" 00:06:36.234 }, 00:06:36.234 { 00:06:36.234 "nbd_device": "/dev/nbd1", 00:06:36.234 "bdev_name": "Malloc1" 00:06:36.234 } 00:06:36.234 ]' 00:06:36.234 00:22:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:36.234 { 00:06:36.234 "nbd_device": "/dev/nbd0", 00:06:36.234 "bdev_name": "Malloc0" 00:06:36.234 }, 00:06:36.234 { 00:06:36.234 "nbd_device": "/dev/nbd1", 00:06:36.234 "bdev_name": "Malloc1" 00:06:36.234 } 00:06:36.234 ]' 00:06:36.234 00:22:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:36.495 /dev/nbd1' 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:36.495 /dev/nbd1' 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:36.495 256+0 records in 00:06:36.495 256+0 records out 00:06:36.495 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00516107 s, 203 MB/s 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:36.495 256+0 records in 00:06:36.495 256+0 records out 00:06:36.495 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144356 s, 72.6 MB/s 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:36.495 256+0 records in 00:06:36.495 256+0 records out 00:06:36.495 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0168915 s, 62.1 MB/s 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:06:36.495 00:22:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:36.756 00:22:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:36.756 00:22:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:36.756 00:22:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:36.756 00:22:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.756 00:22:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.756 00:22:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:36.756 00:22:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.756 00:22:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.756 00:22:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:36.756 00:22:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:36.756 00:22:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:36.756 00:22:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:36.756 00:22:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:36.756 00:22:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:36.756 00:22:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:36.756 00:22:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:36.756 00:22:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:36.756 00:22:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:36.756 00:22:02 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.756 00:22:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.756 00:22:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.016 00:22:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:37.016 00:22:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:37.016 00:22:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.016 00:22:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:37.016 00:22:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:37.016 00:22:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.016 00:22:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:37.016 00:22:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:37.016 00:22:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:37.016 00:22:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:37.016 00:22:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:37.016 00:22:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:37.016 00:22:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:37.276 00:22:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:37.848 [2024-05-15 00:22:03.750429] app.c: 909:spdk_app_start: *NOTICE*: Total cores 
available: 2 00:06:37.848 [2024-05-15 00:22:03.838065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.848 [2024-05-15 00:22:03.838066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.848 [2024-05-15 00:22:03.909361] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:37.848 [2024-05-15 00:22:03.909417] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:40.393 00:22:06 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:40.393 00:22:06 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:40.393 spdk_app_start Round 1 00:06:40.393 00:22:06 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1801820 /var/tmp/spdk-nbd.sock 00:06:40.393 00:22:06 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 1801820 ']' 00:06:40.393 00:22:06 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:40.393 00:22:06 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:40.393 00:22:06 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:40.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:40.393 00:22:06 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:40.393 00:22:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:40.393 00:22:06 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:40.393 00:22:06 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:06:40.393 00:22:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.653 Malloc0 00:06:40.653 00:22:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:40.653 Malloc1 00:06:40.653 00:22:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.653 00:22:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.653 00:22:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.653 00:22:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:40.653 00:22:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.653 00:22:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:40.653 00:22:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:40.653 00:22:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.653 00:22:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:40.653 00:22:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:40.653 00:22:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.653 00:22:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:40.653 00:22:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:40.653 00:22:06 
event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:40.654 00:22:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.654 00:22:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:40.914 /dev/nbd0 00:06:40.914 00:22:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:40.914 00:22:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:40.914 00:22:06 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:06:40.914 00:22:06 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:06:40.914 00:22:06 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:06:40.914 00:22:06 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:06:40.914 00:22:06 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:06:40.914 00:22:06 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:06:40.914 00:22:06 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:06:40.914 00:22:06 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:06:40.914 00:22:06 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:40.914 1+0 records in 00:06:40.914 1+0 records out 00:06:40.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000210431 s, 19.5 MB/s 00:06:40.914 00:22:06 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:06:40.914 00:22:06 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:06:40.915 00:22:06 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:06:40.915 00:22:06 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:06:40.915 00:22:06 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:06:40.915 00:22:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:40.915 00:22:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:40.915 00:22:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:41.175 /dev/nbd1 00:06:41.175 00:22:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:41.175 00:22:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:41.175 00:22:07 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:06:41.175 00:22:07 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:06:41.175 00:22:07 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:06:41.175 00:22:07 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:06:41.175 00:22:07 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:06:41.175 00:22:07 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:06:41.175 00:22:07 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:06:41.175 00:22:07 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:06:41.175 00:22:07 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 
of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:41.175 1+0 records in 00:06:41.175 1+0 records out 00:06:41.175 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00024567 s, 16.7 MB/s 00:06:41.175 00:22:07 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:06:41.175 00:22:07 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:06:41.175 00:22:07 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:06:41.175 00:22:07 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:06:41.175 00:22:07 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:06:41.175 00:22:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:41.175 00:22:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:41.175 00:22:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.175 00:22:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.175 00:22:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.175 00:22:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:41.175 { 00:06:41.175 "nbd_device": "/dev/nbd0", 00:06:41.175 "bdev_name": "Malloc0" 00:06:41.175 }, 00:06:41.175 { 00:06:41.175 "nbd_device": "/dev/nbd1", 00:06:41.175 "bdev_name": "Malloc1" 00:06:41.175 } 00:06:41.175 ]' 00:06:41.175 00:22:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:41.175 { 00:06:41.175 "nbd_device": "/dev/nbd0", 00:06:41.175 "bdev_name": "Malloc0" 00:06:41.175 }, 00:06:41.175 { 00:06:41.175 "nbd_device": "/dev/nbd1", 00:06:41.175 "bdev_name": "Malloc1" 00:06:41.175 } 00:06:41.175 ]' 00:06:41.175 00:22:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.436 00:22:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:41.436 /dev/nbd1' 00:06:41.436 00:22:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:41.436 /dev/nbd1' 00:06:41.436 00:22:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.436 00:22:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:41.436 00:22:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:41.436 00:22:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:41.436 00:22:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:41.436 00:22:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:41.437 256+0 records in 00:06:41.437 256+0 
records out 00:06:41.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00451758 s, 232 MB/s 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:41.437 256+0 records in 00:06:41.437 256+0 records out 00:06:41.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01483 s, 70.7 MB/s 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:41.437 256+0 records in 00:06:41.437 256+0 records out 00:06:41.437 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162246 s, 64.6 MB/s 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.437 00:22:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:41.698 00:22:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:41.698 00:22:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:41.698 00:22:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:41.698 00:22:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.698 00:22:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.698 00:22:07 
event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:41.698 00:22:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:41.698 00:22:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.698 00:22:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:41.698 00:22:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:41.698 00:22:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:41.698 00:22:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:41.698 00:22:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:41.698 00:22:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.698 00:22:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.698 00:22:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:41.698 00:22:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:41.698 00:22:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.698 00:22:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.698 00:22:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.698 00:22:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.959 00:22:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:41.959 00:22:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:41.959 00:22:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.959 00:22:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:41.959 00:22:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:41.959 00:22:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.959 00:22:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:41.959 00:22:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:41.959 00:22:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:41.959 00:22:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:41.959 00:22:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:41.959 00:22:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:41.959 00:22:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:42.222 00:22:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:42.793 [2024-05-15 00:22:08.701810] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:42.794 [2024-05-15 00:22:08.790602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.794 [2024-05-15 00:22:08.790619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.794 [2024-05-15 00:22:08.863591] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:42.794 [2024-05-15 00:22:08.863629] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:06:45.338 00:22:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:45.338 00:22:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:45.338 spdk_app_start Round 2 00:06:45.338 00:22:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1801820 /var/tmp/spdk-nbd.sock 00:06:45.338 00:22:11 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 1801820 ']' 00:06:45.338 00:22:11 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:45.338 00:22:11 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:45.338 00:22:11 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:45.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:45.338 00:22:11 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:45.338 00:22:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:45.338 00:22:11 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:45.338 00:22:11 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:06:45.338 00:22:11 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.598 Malloc0 00:06:45.598 00:22:11 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:45.598 Malloc1 00:06:45.598 00:22:11 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.598 00:22:11 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.598 00:22:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.598 00:22:11 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:45.598 00:22:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.598 00:22:11 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:45.598 00:22:11 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:45.598 00:22:11 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.598 00:22:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:45.598 00:22:11 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:45.598 00:22:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:45.598 00:22:11 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:45.598 00:22:11 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:45.598 00:22:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:45.598 00:22:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.598 00:22:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:45.888 /dev/nbd0 00:06:45.888 00:22:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:45.888 00:22:11 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 
00:06:45.888 00:22:11 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd0 00:06:45.888 00:22:11 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:06:45.888 00:22:11 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:06:45.888 00:22:11 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:06:45.888 00:22:11 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd0 /proc/partitions 00:06:45.888 00:22:11 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:06:45.888 00:22:11 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:06:45.889 00:22:11 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:06:45.889 00:22:11 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:45.889 1+0 records in 00:06:45.889 1+0 records out 00:06:45.889 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021983 s, 18.6 MB/s 00:06:45.889 00:22:11 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:06:45.889 00:22:11 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:06:45.889 00:22:11 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:06:45.889 00:22:11 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:06:45.889 00:22:11 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:06:45.889 00:22:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.889 00:22:11 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:45.889 00:22:11 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:46.171 /dev/nbd1 00:06:46.171 00:22:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:46.171 00:22:12 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:46.171 00:22:12 event.app_repeat -- common/autotest_common.sh@865 -- # local nbd_name=nbd1 00:06:46.171 00:22:12 event.app_repeat -- common/autotest_common.sh@866 -- # local i 00:06:46.171 00:22:12 event.app_repeat -- common/autotest_common.sh@868 -- # (( i = 1 )) 00:06:46.171 00:22:12 event.app_repeat -- common/autotest_common.sh@868 -- # (( i <= 20 )) 00:06:46.171 00:22:12 event.app_repeat -- common/autotest_common.sh@869 -- # grep -q -w nbd1 /proc/partitions 00:06:46.171 00:22:12 event.app_repeat -- common/autotest_common.sh@870 -- # break 00:06:46.171 00:22:12 event.app_repeat -- common/autotest_common.sh@881 -- # (( i = 1 )) 00:06:46.171 00:22:12 event.app_repeat -- common/autotest_common.sh@881 -- # (( i <= 20 )) 00:06:46.171 00:22:12 event.app_repeat -- common/autotest_common.sh@882 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:46.171 1+0 records in 00:06:46.171 1+0 records out 00:06:46.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230531 s, 17.8 MB/s 00:06:46.171 00:22:12 event.app_repeat -- common/autotest_common.sh@883 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:06:46.171 00:22:12 event.app_repeat -- common/autotest_common.sh@883 -- # size=4096 00:06:46.171 00:22:12 event.app_repeat -- common/autotest_common.sh@884 -- # rm -f 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:06:46.171 00:22:12 event.app_repeat -- common/autotest_common.sh@885 -- # '[' 4096 '!=' 0 ']' 00:06:46.171 00:22:12 event.app_repeat -- common/autotest_common.sh@886 -- # return 0 00:06:46.171 00:22:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.171 00:22:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.171 00:22:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.171 00:22:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.171 00:22:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.171 00:22:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:46.171 { 00:06:46.171 "nbd_device": "/dev/nbd0", 00:06:46.171 "bdev_name": "Malloc0" 00:06:46.171 }, 00:06:46.171 { 00:06:46.171 "nbd_device": "/dev/nbd1", 00:06:46.171 "bdev_name": "Malloc1" 00:06:46.171 } 00:06:46.171 ]' 00:06:46.171 00:22:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:46.171 { 00:06:46.171 "nbd_device": "/dev/nbd0", 00:06:46.171 "bdev_name": "Malloc0" 00:06:46.171 }, 00:06:46.171 { 00:06:46.171 "nbd_device": "/dev/nbd1", 00:06:46.171 "bdev_name": "Malloc1" 00:06:46.171 } 00:06:46.171 ]' 00:06:46.171 00:22:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.171 00:22:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:46.171 /dev/nbd1' 00:06:46.171 00:22:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:46.171 /dev/nbd1' 00:06:46.171 00:22:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.171 00:22:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:46.171 00:22:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:46.171 00:22:12 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:46.171 00:22:12 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:46.171 00:22:12 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:46.171 00:22:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.171 00:22:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.171 00:22:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:46.171 00:22:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:06:46.171 00:22:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:46.171 00:22:12 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:46.171 256+0 records in 00:06:46.171 256+0 records out 00:06:46.171 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00548043 s, 191 MB/s 00:06:46.171 00:22:12 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.171 00:22:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:46.171 256+0 records in 00:06:46.171 256+0 records out 00:06:46.171 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014581 s, 71.9 MB/s 00:06:46.171 00:22:12 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.171 00:22:12 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:46.431 256+0 records in 00:06:46.431 256+0 records out 00:06:46.431 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163006 s, 64.3 MB/s 00:06:46.431 00:22:12 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:46.431 00:22:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.431 00:22:12 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.431 00:22:12 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:46.431 00:22:12 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:06:46.431 00:22:12 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:46.431 00:22:12 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:46.431 00:22:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.431 00:22:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:46.431 00:22:12 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:46.431 00:22:12 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:46.431 00:22:12 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:06:46.431 00:22:12 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:46.431 00:22:12 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.431 00:22:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.431 00:22:12 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:46.431 00:22:12 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:46.431 00:22:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.432 00:22:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:46.432 00:22:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:46.432 00:22:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:46.432 00:22:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:46.432 00:22:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.432 00:22:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.432 00:22:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:46.432 00:22:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:46.432 00:22:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.432 00:22:12 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:46.432 00:22:12 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:46.692 00:22:12 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:46.692 00:22:12 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:46.692 00:22:12 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:46.692 00:22:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:46.692 00:22:12 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:46.692 00:22:12 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:46.692 00:22:12 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:46.692 00:22:12 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:46.692 00:22:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.692 00:22:12 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.692 00:22:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.953 00:22:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:46.953 00:22:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:46.953 00:22:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.953 00:22:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:46.953 00:22:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:46.953 00:22:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.953 00:22:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:46.953 00:22:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:46.953 00:22:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:46.953 00:22:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:46.953 00:22:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:46.953 00:22:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:46.953 00:22:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:47.214 00:22:13 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:47.474 [2024-05-15 00:22:13.632230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.734 [2024-05-15 00:22:13.718396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.734 [2024-05-15 00:22:13.718396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.734 [2024-05-15 00:22:13.789620] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:47.734 [2024-05-15 00:22:13.789674] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:50.276 00:22:16 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1801820 /var/tmp/spdk-nbd.sock 00:06:50.276 00:22:16 event.app_repeat -- common/autotest_common.sh@828 -- # '[' -z 1801820 ']' 00:06:50.276 00:22:16 event.app_repeat -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:50.276 00:22:16 event.app_repeat -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:50.276 00:22:16 event.app_repeat -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:50.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:50.276 00:22:16 event.app_repeat -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:50.276 00:22:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:50.276 00:22:16 event.app_repeat -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:50.276 00:22:16 event.app_repeat -- common/autotest_common.sh@861 -- # return 0 00:06:50.276 00:22:16 event.app_repeat -- event/event.sh@39 -- # killprocess 1801820 00:06:50.276 00:22:16 event.app_repeat -- common/autotest_common.sh@947 -- # '[' -z 1801820 ']' 00:06:50.276 00:22:16 event.app_repeat -- common/autotest_common.sh@951 -- # kill -0 1801820 00:06:50.276 00:22:16 event.app_repeat -- common/autotest_common.sh@952 -- # uname 00:06:50.276 00:22:16 event.app_repeat -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:50.276 00:22:16 event.app_repeat -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1801820 00:06:50.276 00:22:16 event.app_repeat -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:50.276 00:22:16 event.app_repeat -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:50.276 00:22:16 event.app_repeat -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1801820' 00:06:50.276 killing process with pid 1801820 00:06:50.276 00:22:16 event.app_repeat -- common/autotest_common.sh@966 -- # kill 1801820 00:06:50.276 00:22:16 event.app_repeat -- common/autotest_common.sh@971 -- # wait 1801820 00:06:50.846 spdk_app_start is called in Round 0. 00:06:50.846 Shutdown signal received, stop current app iteration 00:06:50.846 Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 reinitialization... 00:06:50.846 spdk_app_start is called in Round 1. 00:06:50.846 Shutdown signal received, stop current app iteration 00:06:50.846 Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 reinitialization... 00:06:50.846 spdk_app_start is called in Round 2. 00:06:50.846 Shutdown signal received, stop current app iteration 00:06:50.846 Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 reinitialization... 00:06:50.846 spdk_app_start is called in Round 3. 
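The nbd verification traced at the top of this block follows a fixed pattern: nbd_dd_data_verify writes the random test file onto each exported /dev/nbd* device with dd and then compares the first 1M back with cmp, and nbd_stop_disks detaches each device over the RPC socket and polls /proc/partitions until the kernel drops it. Below is a minimal bash sketch of that flow, reconstructed from the xtrace; the poll interval is an assumption (the log only shows the loop bounds), and $rootdir stands for the SPDK checkout path used in the trace.

rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk      # checkout path as used in the trace
rpc_server=/var/tmp/spdk-nbd.sock
tmp_file=$rootdir/test/event/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)

# write phase: 256 x 4096-byte blocks of the random file onto each device, bypassing the page cache
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# verify phase: the first 1M read back from each device must match the source file
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
rm "$tmp_file"

# teardown: detach over RPC, then wait (up to 20 polls) for the device to leave /proc/partitions
for dev in "${nbd_list[@]}"; do
    "$rootdir/scripts/rpc.py" -s "$rpc_server" nbd_stop_disk "$dev"
    name=$(basename "$dev")
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$name" /proc/partitions || break
        sleep 0.1                                         # poll interval assumed, not visible in the log
    done
done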
00:06:50.846 Shutdown signal received, stop current app iteration 00:06:50.846 00:22:16 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:50.846 00:22:16 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:50.846 00:06:50.846 real 0m16.045s 00:06:50.846 user 0m33.250s 00:06:50.846 sys 0m2.202s 00:06:50.846 00:22:16 event.app_repeat -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:50.846 00:22:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:50.846 ************************************ 00:06:50.846 END TEST app_repeat 00:06:50.846 ************************************ 00:06:50.846 00:22:16 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:50.846 00:22:16 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:50.846 00:22:16 event -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:50.846 00:22:16 event -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:50.846 00:22:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:50.846 ************************************ 00:06:50.846 START TEST cpu_locks 00:06:50.846 ************************************ 00:06:50.846 00:22:16 event.cpu_locks -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:50.846 * Looking for test storage... 00:06:50.846 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event 00:06:50.846 00:22:16 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:50.846 00:22:16 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:50.846 00:22:16 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:50.846 00:22:16 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:50.846 00:22:16 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:50.846 00:22:16 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:50.846 00:22:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.846 ************************************ 00:06:50.846 START TEST default_locks 00:06:50.846 ************************************ 00:06:50.846 00:22:16 event.cpu_locks.default_locks -- common/autotest_common.sh@1122 -- # default_locks 00:06:50.846 00:22:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1805064 00:06:50.846 00:22:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1805064 00:06:50.846 00:22:16 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 1805064 ']' 00:06:50.846 00:22:16 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.846 00:22:16 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:50.846 00:22:16 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
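The killprocess helper used to stop the app_repeat target above (pid 1801820), and again at the end of every cpu_locks sub-test below, does little more than sanity-check the pid and its command name before signalling it. A condensed sketch of the steps visible in the xtrace; the real helper also special-cases sudo-wrapped targets, which is only noted in a comment here.

killprocess() {
    local pid=$1
    kill -0 "$pid"                                    # assert the process still exists
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK target; sudo wrappers are handled separately in the real helper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                       # reap the target so its exit is observed before the next sub-test
}

killprocess "$spdk_tgt_pid"                           # typical call, mirroring the trace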
00:06:50.846 00:22:16 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:50.846 00:22:16 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:50.846 00:22:16 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:51.106 [2024-05-15 00:22:17.067509] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:06:51.106 [2024-05-15 00:22:17.067654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1805064 ] 00:06:51.106 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.106 [2024-05-15 00:22:17.200732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.366 [2024-05-15 00:22:17.301548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.626 00:22:17 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:51.626 00:22:17 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 0 00:06:51.626 00:22:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1805064 00:06:51.626 00:22:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1805064 00:06:51.626 00:22:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:51.885 lslocks: write error 00:06:51.885 00:22:17 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1805064 00:06:51.885 00:22:17 event.cpu_locks.default_locks -- common/autotest_common.sh@947 -- # '[' -z 1805064 ']' 00:06:51.885 00:22:17 event.cpu_locks.default_locks -- common/autotest_common.sh@951 -- # kill -0 1805064 00:06:51.885 00:22:17 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # uname 00:06:51.885 00:22:17 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:51.885 00:22:17 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1805064 00:06:51.885 00:22:17 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:51.885 00:22:17 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:51.885 00:22:17 event.cpu_locks.default_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1805064' 00:06:51.885 killing process with pid 1805064 00:06:51.885 00:22:17 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # kill 1805064 00:06:51.885 00:22:17 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # wait 1805064 00:06:52.825 00:22:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1805064 00:06:52.825 00:22:18 event.cpu_locks.default_locks -- common/autotest_common.sh@649 -- # local es=0 00:06:52.825 00:22:18 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1805064 00:06:52.825 00:22:18 event.cpu_locks.default_locks -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:06:52.825 00:22:18 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:52.825 00:22:18 event.cpu_locks.default_locks -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:06:52.825 00:22:18 event.cpu_locks.default_locks 
-- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:06:52.825 00:22:18 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # waitforlisten 1805064 00:06:52.825 00:22:18 event.cpu_locks.default_locks -- common/autotest_common.sh@828 -- # '[' -z 1805064 ']' 00:06:52.825 00:22:18 event.cpu_locks.default_locks -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.825 00:22:18 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:52.825 00:22:18 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.825 00:22:18 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:52.825 00:22:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.825 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (1805064) - No such process 00:06:52.825 ERROR: process (pid: 1805064) is no longer running 00:06:52.825 00:22:18 event.cpu_locks.default_locks -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:52.825 00:22:18 event.cpu_locks.default_locks -- common/autotest_common.sh@861 -- # return 1 00:06:52.825 00:22:18 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # es=1 00:06:52.825 00:22:18 event.cpu_locks.default_locks -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:06:52.825 00:22:18 event.cpu_locks.default_locks -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:06:52.825 00:22:18 event.cpu_locks.default_locks -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:06:52.825 00:22:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:52.825 00:22:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:52.825 00:22:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:52.825 00:22:18 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:52.825 00:06:52.825 real 0m1.867s 00:06:52.825 user 0m1.780s 00:06:52.825 sys 0m0.555s 00:06:52.825 00:22:18 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:52.825 00:22:18 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.825 ************************************ 00:06:52.825 END TEST default_locks 00:06:52.825 ************************************ 00:06:52.825 00:22:18 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:52.825 00:22:18 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:52.825 00:22:18 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:52.825 00:22:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:52.825 ************************************ 00:06:52.825 START TEST default_locks_via_rpc 00:06:52.825 ************************************ 00:06:52.825 00:22:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1122 -- # default_locks_via_rpc 00:06:52.825 00:22:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1805601 00:06:52.825 00:22:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1805601 00:06:52.825 00:22:18 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 1805601 ']' 00:06:52.825 00:22:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.825 00:22:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:52.825 00:22:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.825 00:22:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:52.825 00:22:18 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:52.826 00:22:18 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:53.085 [2024-05-15 00:22:18.992912] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:06:53.085 [2024-05-15 00:22:18.993021] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1805601 ] 00:06:53.085 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.085 [2024-05-15 00:22:19.085347] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.085 [2024-05-15 00:22:19.182970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.651 00:22:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:53.651 00:22:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:06:53.651 00:22:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:53.651 00:22:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:53.651 00:22:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.651 00:22:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:53.651 00:22:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:53.651 00:22:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:53.651 00:22:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:53.651 00:22:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:53.651 00:22:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:53.651 00:22:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:06:53.651 00:22:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:53.651 00:22:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:06:53.651 00:22:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1805601 00:06:53.651 00:22:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1805601 00:06:53.651 00:22:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q 
spdk_cpu_lock 00:06:53.911 00:22:19 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1805601 00:06:53.911 00:22:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@947 -- # '[' -z 1805601 ']' 00:06:53.911 00:22:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@951 -- # kill -0 1805601 00:06:53.911 00:22:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # uname 00:06:53.911 00:22:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:53.911 00:22:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1805601 00:06:53.911 00:22:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:53.911 00:22:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:53.911 00:22:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1805601' 00:06:53.911 killing process with pid 1805601 00:06:53.911 00:22:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # kill 1805601 00:06:53.911 00:22:19 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # wait 1805601 00:06:54.848 00:06:54.848 real 0m1.861s 00:06:54.848 user 0m1.812s 00:06:54.848 sys 0m0.497s 00:06:54.848 00:22:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:54.848 00:22:20 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:54.848 ************************************ 00:06:54.848 END TEST default_locks_via_rpc 00:06:54.848 ************************************ 00:06:54.848 00:22:20 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:54.848 00:22:20 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:54.848 00:22:20 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:54.848 00:22:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.848 ************************************ 00:06:54.848 START TEST non_locking_app_on_locked_coremask 00:06:54.848 ************************************ 00:06:54.848 00:22:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # non_locking_app_on_locked_coremask 00:06:54.848 00:22:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1806002 00:06:54.848 00:22:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1806002 /var/tmp/spdk.sock 00:06:54.848 00:22:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 1806002 ']' 00:06:54.848 00:22:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.848 00:22:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:54.848 00:22:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.848 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
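default_locks and default_locks_via_rpc above both reduce to the same check: while spdk_tgt -m 0x1 is alive it must hold an advisory file lock whose path contains spdk_cpu_lock, and after it is killed no lock files may remain. The "lslocks: write error" lines in the log are benign: grep -q closes the pipe as soon as it matches, so lslocks hits a broken pipe on its remaining output. A sketch of the two checks as they appear in the xtrace; how no_locks populates its array is not visible in the trace, so a glob over the lock-file naming seen later in this log is assumed.

locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock         # the target must hold its CPU-core lock file
}

no_locks() {
    shopt -s nullglob                                 # so an empty match really yields an empty array
    local lock_files=(/var/tmp/spdk_cpu_lock*)        # naming assumed from the lock files checked in the overlapped-coremask tests
    (( ${#lock_files[@]} == 0 ))
}

locks_exist "$spdk_tgt_pid"   # true while spdk_tgt -m 0x1 is running
killprocess "$spdk_tgt_pid"
no_locks                      # true once the dead target's locks are gone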
00:06:54.848 00:22:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:54.848 00:22:20 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.848 00:22:20 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:54.848 [2024-05-15 00:22:20.884929] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:06:54.848 [2024-05-15 00:22:20.884997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1806002 ] 00:06:54.848 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.848 [2024-05-15 00:22:20.968447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.108 [2024-05-15 00:22:21.060165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.674 00:22:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:55.674 00:22:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:06:55.674 00:22:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1806019 00:06:55.674 00:22:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1806019 /var/tmp/spdk2.sock 00:06:55.674 00:22:21 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:55.674 00:22:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 1806019 ']' 00:06:55.674 00:22:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:55.674 00:22:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:55.674 00:22:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:55.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:55.674 00:22:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:55.674 00:22:21 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.674 [2024-05-15 00:22:21.676754] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:06:55.674 [2024-05-15 00:22:21.676867] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1806019 ] 00:06:55.674 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.674 [2024-05-15 00:22:21.829594] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:55.674 [2024-05-15 00:22:21.829636] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.933 [2024-05-15 00:22:22.015263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.871 00:22:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:06:56.871 00:22:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:06:56.871 00:22:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1806002 00:06:56.871 00:22:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1806002 00:06:56.871 00:22:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:56.871 lslocks: write error 00:06:56.871 00:22:22 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1806002 00:06:56.871 00:22:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 1806002 ']' 00:06:56.871 00:22:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 1806002 00:06:56.871 00:22:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:06:56.871 00:22:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:56.871 00:22:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1806002 00:06:56.871 00:22:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:56.871 00:22:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:56.871 00:22:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1806002' 00:06:56.871 killing process with pid 1806002 00:06:56.871 00:22:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 1806002 00:06:56.871 00:22:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 1806002 00:06:58.776 00:22:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1806019 00:06:58.776 00:22:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 1806019 ']' 00:06:58.776 00:22:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 1806019 00:06:58.776 00:22:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:06:58.776 00:22:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:06:58.776 00:22:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1806019 00:06:58.776 00:22:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:06:58.776 00:22:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:06:58.776 00:22:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1806019' 00:06:58.776 
killing process with pid 1806019 00:06:58.776 00:22:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 1806019 00:06:58.776 00:22:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 1806019 00:06:59.715 00:06:59.715 real 0m4.688s 00:06:59.715 user 0m4.771s 00:06:59.715 sys 0m0.919s 00:06:59.715 00:22:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:06:59.715 00:22:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.715 ************************************ 00:06:59.715 END TEST non_locking_app_on_locked_coremask 00:06:59.715 ************************************ 00:06:59.715 00:22:25 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:59.715 00:22:25 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:06:59.715 00:22:25 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:06:59.715 00:22:25 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.715 ************************************ 00:06:59.715 START TEST locking_app_on_unlocked_coremask 00:06:59.715 ************************************ 00:06:59.715 00:22:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_unlocked_coremask 00:06:59.715 00:22:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1806940 00:06:59.715 00:22:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1806940 /var/tmp/spdk.sock 00:06:59.715 00:22:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 1806940 ']' 00:06:59.715 00:22:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.715 00:22:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:06:59.715 00:22:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.715 00:22:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:06:59.715 00:22:25 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.715 00:22:25 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:59.715 [2024-05-15 00:22:25.686526] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:06:59.715 [2024-05-15 00:22:25.686665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1806940 ] 00:06:59.715 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.715 [2024-05-15 00:22:25.818827] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
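non_locking_app_on_locked_coremask, which finishes just above, and the tests that follow all drive the same two-instance setup: one spdk_tgt claims core 0 and holds its lock, and a second instance is pointed at its own RPC socket and either opts out of locking with --disable-cpumask-locks or is expected to fail. A sketch of the passing variant traced here; the waitforlisten calls between the launches are elided, and $rootdir again stands for the SPDK checkout path used in the log.

rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk

"$rootdir/build/bin/spdk_tgt" -m 0x1 &                 # first target: claims core 0, creates /var/tmp/spdk_cpu_lock_000
spdk_tgt_pid=$!

"$rootdir/build/bin/spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
spdk_tgt_pid2=$!                                       # second target: same core, logs "CPU core locks deactivated." and starts anyway

# ... run the per-test lock checks, then tear both down in reverse order
killprocess "$spdk_tgt_pid2"
killprocess "$spdk_tgt_pid"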
00:06:59.715 [2024-05-15 00:22:25.818874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.973 [2024-05-15 00:22:25.919329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.543 00:22:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:00.543 00:22:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:07:00.543 00:22:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1807057 00:07:00.543 00:22:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1807057 /var/tmp/spdk2.sock 00:07:00.543 00:22:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@828 -- # '[' -z 1807057 ']' 00:07:00.543 00:22:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.543 00:22:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:00.543 00:22:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:00.543 00:22:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.543 00:22:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:00.543 00:22:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.543 [2024-05-15 00:22:26.505458] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:07:00.543 [2024-05-15 00:22:26.505605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1807057 ] 00:07:00.543 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.543 [2024-05-15 00:22:26.680324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.801 [2024-05-15 00:22:26.870331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.740 00:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:01.740 00:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@861 -- # return 0 00:07:01.740 00:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1807057 00:07:01.740 00:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1807057 00:07:01.740 00:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:01.740 lslocks: write error 00:07:01.740 00:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1806940 00:07:01.740 00:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 1806940 ']' 00:07:01.740 00:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 1806940 00:07:01.740 00:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:07:01.740 00:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:01.740 00:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1806940 00:07:01.740 00:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:07:01.740 00:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:07:01.740 00:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1806940' 00:07:01.740 killing process with pid 1806940 00:07:01.740 00:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 1806940 00:07:01.740 00:22:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 1806940 00:07:03.647 00:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1807057 00:07:03.647 00:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@947 -- # '[' -z 1807057 ']' 00:07:03.647 00:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@951 -- # kill -0 1807057 00:07:03.647 00:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # uname 00:07:03.647 00:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:03.647 00:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1807057 00:07:03.647 00:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 
00:07:03.647 00:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:07:03.647 00:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1807057' 00:07:03.647 killing process with pid 1807057 00:07:03.647 00:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # kill 1807057 00:07:03.647 00:22:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # wait 1807057 00:07:04.585 00:07:04.585 real 0m4.867s 00:07:04.585 user 0m4.914s 00:07:04.585 sys 0m1.059s 00:07:04.585 00:22:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:04.585 00:22:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.585 ************************************ 00:07:04.585 END TEST locking_app_on_unlocked_coremask 00:07:04.585 ************************************ 00:07:04.585 00:22:30 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:04.585 00:22:30 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:07:04.585 00:22:30 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:04.585 00:22:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.585 ************************************ 00:07:04.585 START TEST locking_app_on_locked_coremask 00:07:04.585 ************************************ 00:07:04.585 00:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1122 -- # locking_app_on_locked_coremask 00:07:04.585 00:22:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1807878 00:07:04.585 00:22:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1807878 /var/tmp/spdk.sock 00:07:04.585 00:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 1807878 ']' 00:07:04.585 00:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.585 00:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:04.585 00:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.585 00:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:04.585 00:22:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:04.586 00:22:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:04.586 [2024-05-15 00:22:30.613932] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:07:04.586 [2024-05-15 00:22:30.614041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1807878 ] 00:07:04.586 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.586 [2024-05-15 00:22:30.727991] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.844 [2024-05-15 00:22:30.827963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.412 00:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:05.412 00:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 0 00:07:05.412 00:22:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1808174 00:07:05.412 00:22:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1808174 /var/tmp/spdk2.sock 00:07:05.412 00:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@649 -- # local es=0 00:07:05.412 00:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1808174 /var/tmp/spdk2.sock 00:07:05.412 00:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:07:05.412 00:22:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:05.412 00:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:05.412 00:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:07:05.412 00:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:05.412 00:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # waitforlisten 1808174 /var/tmp/spdk2.sock 00:07:05.413 00:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@828 -- # '[' -z 1808174 ']' 00:07:05.413 00:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.413 00:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:05.413 00:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:05.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:05.413 00:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:05.413 00:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:05.413 [2024-05-15 00:22:31.383411] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:07:05.413 [2024-05-15 00:22:31.383520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1808174 ] 00:07:05.413 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.413 [2024-05-15 00:22:31.531756] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1807878 has claimed it. 00:07:05.413 [2024-05-15 00:22:31.531802] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:05.984 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (1808174) - No such process 00:07:05.984 ERROR: process (pid: 1808174) is no longer running 00:07:05.984 00:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:05.984 00:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@861 -- # return 1 00:07:05.984 00:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # es=1 00:07:05.984 00:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:05.984 00:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:05.984 00:22:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:05.984 00:22:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1807878 00:07:05.984 00:22:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1807878 00:07:05.984 00:22:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:05.984 lslocks: write error 00:07:05.984 00:22:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1807878 00:07:05.984 00:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@947 -- # '[' -z 1807878 ']' 00:07:05.984 00:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@951 -- # kill -0 1807878 00:07:05.984 00:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # uname 00:07:05.984 00:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:05.984 00:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1807878 00:07:06.244 00:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:07:06.244 00:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:07:06.244 00:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1807878' 00:07:06.244 killing process with pid 1807878 00:07:06.244 00:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # kill 1807878 00:07:06.244 00:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # wait 1807878 00:07:07.180 00:07:07.180 real 0m2.462s 00:07:07.180 user 0m2.520s 00:07:07.180 sys 0m0.670s 00:07:07.180 00:22:32 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1123 -- # xtrace_disable 00:07:07.180 00:22:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.180 ************************************ 00:07:07.180 END TEST locking_app_on_locked_coremask 00:07:07.180 ************************************ 00:07:07.180 00:22:33 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:07.180 00:22:33 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:07:07.180 00:22:33 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:07.180 00:22:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.180 ************************************ 00:07:07.180 START TEST locking_overlapped_coremask 00:07:07.180 ************************************ 00:07:07.180 00:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask 00:07:07.180 00:22:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1808511 00:07:07.180 00:22:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1808511 /var/tmp/spdk.sock 00:07:07.180 00:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 1808511 ']' 00:07:07.180 00:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.180 00:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:07.180 00:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.180 00:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:07.180 00:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.180 00:22:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:07.180 [2024-05-15 00:22:33.159613] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:07:07.180 [2024-05-15 00:22:33.159742] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1808511 ] 00:07:07.180 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.180 [2024-05-15 00:22:33.280837] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:07.447 [2024-05-15 00:22:33.373539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.447 [2024-05-15 00:22:33.373622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.447 [2024-05-15 00:22:33.373627] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.712 00:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:07.712 00:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 0 00:07:07.712 00:22:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1808535 00:07:07.712 00:22:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1808535 /var/tmp/spdk2.sock 00:07:07.712 00:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@649 -- # local es=0 00:07:07.712 00:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # valid_exec_arg waitforlisten 1808535 /var/tmp/spdk2.sock 00:07:07.712 00:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@637 -- # local arg=waitforlisten 00:07:07.712 00:22:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:07.712 00:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:07.712 00:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # type -t waitforlisten 00:07:07.712 00:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:07.712 00:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # waitforlisten 1808535 /var/tmp/spdk2.sock 00:07:07.712 00:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@828 -- # '[' -z 1808535 ']' 00:07:07.712 00:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:07.712 00:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:07.712 00:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:07.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:07.712 00:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:07.712 00:22:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.971 [2024-05-15 00:22:33.933846] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:07:07.971 [2024-05-15 00:22:33.933979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1808535 ] 00:07:07.971 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.971 [2024-05-15 00:22:34.104726] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1808511 has claimed it. 00:07:07.971 [2024-05-15 00:22:34.104780] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:08.539 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 843: kill: (1808535) - No such process 00:07:08.539 ERROR: process (pid: 1808535) is no longer running 00:07:08.539 00:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:08.539 00:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@861 -- # return 1 00:07:08.539 00:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # es=1 00:07:08.539 00:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:08.539 00:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:08.539 00:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:08.539 00:22:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:08.539 00:22:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:08.539 00:22:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:08.539 00:22:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:08.539 00:22:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1808511 00:07:08.539 00:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@947 -- # '[' -z 1808511 ']' 00:07:08.539 00:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@951 -- # kill -0 1808511 00:07:08.539 00:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # uname 00:07:08.539 00:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:08.539 00:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1808511 00:07:08.539 00:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:07:08.539 00:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:07:08.539 00:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1808511' 00:07:08.539 killing process with pid 1808511 00:07:08.539 00:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # kill 1808511 
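locking_app_on_locked_coremask and locking_overlapped_coremask (whose claim failure on core 2 appears just above) both lean on two small pieces of harness logic: NOT, which treats a non-zero exit from the wrapped command as the expected outcome, and check_remaining_locks, which asserts that exactly the lock files for the claimed cores remain. Sketches of both, condensed from the xtrace; NOT's handling of exit codes above 128 is simplified here.

# Expected-failure wrapper: succeeds only if the wrapped command fails, e.g.
# NOT waitforlisten "$spdk_tgt_pid2" /var/tmp/spdk2.sock after
# "Cannot create lock on core 2, probably process 1808511 has claimed it."
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))
}

# After spdk_tgt -m 0x7 has claimed cores 0-2, only those three lock files may exist.
check_remaining_locks() {
    local locks=(/var/tmp/spdk_cpu_lock_*)
    local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]]
}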
00:07:08.539 00:22:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # wait 1808511 00:07:09.476 00:07:09.476 real 0m2.343s 00:07:09.476 user 0m6.053s 00:07:09.476 sys 0m0.587s 00:07:09.476 00:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:09.476 00:22:35 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.476 ************************************ 00:07:09.476 END TEST locking_overlapped_coremask 00:07:09.476 ************************************ 00:07:09.476 00:22:35 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:09.476 00:22:35 event.cpu_locks -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:07:09.476 00:22:35 event.cpu_locks -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:09.476 00:22:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.476 ************************************ 00:07:09.476 START TEST locking_overlapped_coremask_via_rpc 00:07:09.476 ************************************ 00:07:09.476 00:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1122 -- # locking_overlapped_coremask_via_rpc 00:07:09.476 00:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1808880 00:07:09.476 00:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1808880 /var/tmp/spdk.sock 00:07:09.476 00:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 1808880 ']' 00:07:09.476 00:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.476 00:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:09.476 00:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:09.476 00:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.476 00:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:09.476 00:22:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.476 [2024-05-15 00:22:35.559649] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:07:09.476 [2024-05-15 00:22:35.559769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1808880 ] 00:07:09.476 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.734 [2024-05-15 00:22:35.675394] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:09.734 [2024-05-15 00:22:35.675428] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:09.734 [2024-05-15 00:22:35.770039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.734 [2024-05-15 00:22:35.770051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.734 [2024-05-15 00:22:35.770059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.302 00:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:10.302 00:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:07:10.302 00:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1809158 00:07:10.302 00:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1809158 /var/tmp/spdk2.sock 00:07:10.302 00:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 1809158 ']' 00:07:10.302 00:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:10.302 00:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:10.302 00:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:10.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:10.302 00:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:10.302 00:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.302 00:22:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:10.302 [2024-05-15 00:22:36.365388] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:07:10.302 [2024-05-15 00:22:36.365526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1809158 ] 00:07:10.302 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.562 [2024-05-15 00:22:36.535777] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
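For reference, the two core masks used above overlap exactly where the test needs them to: 0x7 selects cores 0-2 and 0x1c selects cores 2-4, so both targets contend for core 2 once locks are enabled. A small illustrative helper (not part of the test suite) that decodes a cpumask into core numbers:

    mask_to_cores() {                      # e.g. mask_to_cores 0x1c  ->  2 3 4
        local mask=$(( $1 )) core=0 cores=()
        while (( mask )); do
            if (( mask & 1 )); then cores+=("$core"); fi
            mask=$(( mask >> 1 ))
            core=$(( core + 1 ))
        done
        echo "${cores[*]}"
    }
    mask_to_cores 0x7     # -> 0 1 2
    mask_to_cores 0x1c    # -> 2 3 4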
00:07:10.562 [2024-05-15 00:22:36.535820] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:10.821 [2024-05-15 00:22:36.735736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.821 [2024-05-15 00:22:36.735866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.821 [2024-05-15 00:22:36.735897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:11.454 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:11.454 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:07:11.454 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:11.454 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:11.454 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.454 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:11.454 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:11.454 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@649 -- # local es=0 00:07:11.454 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:11.454 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:07:11.454 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:11.454 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:07:11.454 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:11.454 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:11.454 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:11.454 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.454 [2024-05-15 00:22:37.500680] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1808880 has claimed it. 
00:07:11.454 request: 00:07:11.454 { 00:07:11.454 "method": "framework_enable_cpumask_locks", 00:07:11.454 "req_id": 1 00:07:11.454 } 00:07:11.454 Got JSON-RPC error response 00:07:11.454 response: 00:07:11.454 { 00:07:11.454 "code": -32603, 00:07:11.454 "message": "Failed to claim CPU core: 2" 00:07:11.454 } 00:07:11.454 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:07:11.454 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # es=1 00:07:11.454 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:11.454 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:11.454 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:11.454 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1808880 /var/tmp/spdk.sock 00:07:11.454 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 1808880 ']' 00:07:11.454 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.454 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:11.454 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.454 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.454 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:11.454 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.716 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:11.716 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:07:11.716 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1809158 /var/tmp/spdk2.sock 00:07:11.716 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@828 -- # '[' -z 1809158 ']' 00:07:11.716 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.716 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:11.716 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
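The request/response pair above is plain JSON-RPC; issued by hand it is a single rpc.py call against the second target's socket (a sketch, assuming a stock SPDK checkout so scripts/rpc.py is available):

    # ask the target on spdk2.sock to start taking per-core lock files; this returns the
    # -32603 "Failed to claim CPU core: 2" error for as long as the first target (mask 0x7)
    # still holds /var/tmp/spdk_cpu_lock_002
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks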
00:07:11.716 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:11.716 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.716 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:11.716 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@861 -- # return 0 00:07:11.716 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:11.716 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:11.716 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:11.716 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:11.716 00:07:11.716 real 0m2.364s 00:07:11.716 user 0m0.732s 00:07:11.716 sys 0m0.153s 00:07:11.716 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:11.716 00:22:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.716 ************************************ 00:07:11.716 END TEST locking_overlapped_coremask_via_rpc 00:07:11.716 ************************************ 00:07:11.716 00:22:37 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:11.716 00:22:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1808880 ]] 00:07:11.716 00:22:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1808880 00:07:11.716 00:22:37 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 1808880 ']' 00:07:11.716 00:22:37 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 1808880 00:07:11.716 00:22:37 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:07:11.716 00:22:37 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:11.716 00:22:37 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1808880 00:07:11.975 00:22:37 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:07:11.975 00:22:37 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:07:11.975 00:22:37 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1808880' 00:07:11.975 killing process with pid 1808880 00:07:11.976 00:22:37 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 1808880 00:07:11.976 00:22:37 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 1808880 00:07:12.914 00:22:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1809158 ]] 00:07:12.914 00:22:38 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1809158 00:07:12.914 00:22:38 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 1809158 ']' 00:07:12.914 00:22:38 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 1809158 00:07:12.914 00:22:38 event.cpu_locks -- common/autotest_common.sh@952 -- # uname 00:07:12.914 00:22:38 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' 
Linux = Linux ']' 00:07:12.914 00:22:38 event.cpu_locks -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1809158 00:07:12.914 00:22:38 event.cpu_locks -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:07:12.914 00:22:38 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:07:12.914 00:22:38 event.cpu_locks -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1809158' 00:07:12.914 killing process with pid 1809158 00:07:12.914 00:22:38 event.cpu_locks -- common/autotest_common.sh@966 -- # kill 1809158 00:07:12.914 00:22:38 event.cpu_locks -- common/autotest_common.sh@971 -- # wait 1809158 00:07:13.483 00:22:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:13.483 00:22:39 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:13.483 00:22:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1808880 ]] 00:07:13.483 00:22:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1808880 00:07:13.483 00:22:39 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 1808880 ']' 00:07:13.483 00:22:39 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 1808880 00:07:13.483 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (1808880) - No such process 00:07:13.483 00:22:39 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 1808880 is not found' 00:07:13.483 Process with pid 1808880 is not found 00:07:13.483 00:22:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1809158 ]] 00:07:13.483 00:22:39 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1809158 00:07:13.483 00:22:39 event.cpu_locks -- common/autotest_common.sh@947 -- # '[' -z 1809158 ']' 00:07:13.483 00:22:39 event.cpu_locks -- common/autotest_common.sh@951 -- # kill -0 1809158 00:07:13.483 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (1809158) - No such process 00:07:13.483 00:22:39 event.cpu_locks -- common/autotest_common.sh@974 -- # echo 'Process with pid 1809158 is not found' 00:07:13.483 Process with pid 1809158 is not found 00:07:13.483 00:22:39 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:13.483 00:07:13.483 real 0m22.788s 00:07:13.483 user 0m37.690s 00:07:13.483 sys 0m5.546s 00:07:13.483 00:22:39 event.cpu_locks -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:13.483 00:22:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:13.483 ************************************ 00:07:13.483 END TEST cpu_locks 00:07:13.483 ************************************ 00:07:13.743 00:07:13.743 real 0m46.219s 00:07:13.743 user 1m22.637s 00:07:13.743 sys 0m8.977s 00:07:13.743 00:22:39 event -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:13.743 00:22:39 event -- common/autotest_common.sh@10 -- # set +x 00:07:13.743 ************************************ 00:07:13.743 END TEST event 00:07:13.743 ************************************ 00:07:13.743 00:22:39 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/thread.sh 00:07:13.743 00:22:39 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:07:13.743 00:22:39 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:13.743 00:22:39 -- common/autotest_common.sh@10 -- # set +x 00:07:13.743 ************************************ 00:07:13.743 START TEST thread 00:07:13.743 ************************************ 00:07:13.743 00:22:39 thread -- common/autotest_common.sh@1122 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/thread.sh 00:07:13.743 * Looking for test storage... 00:07:13.743 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread 00:07:13.743 00:22:39 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:13.743 00:22:39 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:07:13.743 00:22:39 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:13.743 00:22:39 thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.743 ************************************ 00:07:13.743 START TEST thread_poller_perf 00:07:13.743 ************************************ 00:07:13.743 00:22:39 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:13.743 [2024-05-15 00:22:39.890018] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:07:13.743 [2024-05-15 00:22:39.890129] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1809862 ] 00:07:14.004 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.004 [2024-05-15 00:22:40.011868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.004 [2024-05-15 00:22:40.125268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.004 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:15.381 ====================================== 00:07:15.381 busy:1904935410 (cyc) 00:07:15.381 total_run_count: 402000 00:07:15.381 tsc_hz: 1900000000 (cyc) 00:07:15.381 ====================================== 00:07:15.381 poller_cost: 4738 (cyc), 2493 (nsec) 00:07:15.381 00:07:15.381 real 0m1.422s 00:07:15.381 user 0m1.290s 00:07:15.381 sys 0m0.125s 00:07:15.381 00:22:41 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:15.381 00:22:41 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:15.381 ************************************ 00:07:15.381 END TEST thread_poller_perf 00:07:15.381 ************************************ 00:07:15.381 00:22:41 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:15.381 00:22:41 thread -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:07:15.381 00:22:41 thread -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:15.381 00:22:41 thread -- common/autotest_common.sh@10 -- # set +x 00:07:15.381 ************************************ 00:07:15.381 START TEST thread_poller_perf 00:07:15.381 ************************************ 00:07:15.381 00:22:41 thread.thread_poller_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:15.381 [2024-05-15 00:22:41.388417] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
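The poller_cost line in the result block above is just the two counters divided out: 1904935410 busy cycles / 402000 poller runs ≈ 4738 cycles per poller, and at the reported tsc_hz of 1.9 GHz that is 4738 / 1.9 ≈ 2493 ns. The same back-of-envelope check as a one-liner (truncating the cycle count first, as the tool appears to):

    awk 'BEGIN { busy = 1904935410; runs = 402000; hz = 1900000000
                 cyc = int(busy / runs)                  # cycles per poller invocation
                 printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, int(cyc * 1e9 / hz) }'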
00:07:15.381 [2024-05-15 00:22:41.388557] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1810181 ] 00:07:15.381 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.381 [2024-05-15 00:22:41.518617] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.639 [2024-05-15 00:22:41.615366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.639 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:17.038 ====================================== 00:07:17.038 busy:1901862150 (cyc) 00:07:17.038 total_run_count: 5359000 00:07:17.038 tsc_hz: 1900000000 (cyc) 00:07:17.038 ====================================== 00:07:17.038 poller_cost: 354 (cyc), 186 (nsec) 00:07:17.038 00:07:17.038 real 0m1.419s 00:07:17.038 user 0m1.254s 00:07:17.038 sys 0m0.158s 00:07:17.038 00:22:42 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:17.038 00:22:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:17.038 ************************************ 00:07:17.038 END TEST thread_poller_perf 00:07:17.038 ************************************ 00:07:17.038 00:22:42 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:17.038 00:07:17.038 real 0m3.066s 00:07:17.038 user 0m2.619s 00:07:17.038 sys 0m0.442s 00:07:17.038 00:22:42 thread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:17.038 00:22:42 thread -- common/autotest_common.sh@10 -- # set +x 00:07:17.038 ************************************ 00:07:17.038 END TEST thread 00:07:17.038 ************************************ 00:07:17.038 00:22:42 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel.sh 00:07:17.038 00:22:42 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:07:17.038 00:22:42 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:17.038 00:22:42 -- common/autotest_common.sh@10 -- # set +x 00:07:17.038 ************************************ 00:07:17.038 START TEST accel 00:07:17.038 ************************************ 00:07:17.038 00:22:42 accel -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel.sh 00:07:17.038 * Looking for test storage... 00:07:17.038 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel 00:07:17.038 00:22:42 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:17.038 00:22:42 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:17.038 00:22:42 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:17.038 00:22:42 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1810540 00:07:17.038 00:22:42 accel -- accel/accel.sh@63 -- # waitforlisten 1810540 00:07:17.038 00:22:42 accel -- common/autotest_common.sh@828 -- # '[' -z 1810540 ']' 00:07:17.038 00:22:42 accel -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.038 00:22:42 accel -- common/autotest_common.sh@833 -- # local max_retries=100 00:07:17.038 00:22:42 accel -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
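The zero-period run that closes the thread suite follows the same arithmetic: 1901862150 / 5359000 ≈ 354 cycles ≈ 186 ns per poller, roughly 13x cheaper than the 1 us-period run above. Parameterizing the earlier one-liner (values taken from the trace; any other pair can be substituted):

    cost() { awk -v busy="$1" -v runs="$2" -v hz=1900000000 \
        'BEGIN { c = int(busy / runs); printf "%d cyc, %d nsec\n", c, int(c * 1e9 / hz) }'; }
    cost 1904935410  402000    # 1 us period run -> 4738 cyc, 2493 nsec
    cost 1901862150 5359000    # 0 us period run -> 354 cyc, 186 nsec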
00:07:17.038 00:22:42 accel -- common/autotest_common.sh@837 -- # xtrace_disable 00:07:17.038 00:22:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:17.038 00:22:42 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:17.038 00:22:42 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:17.038 00:22:42 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:17.038 00:22:42 accel -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:17.038 00:22:42 accel -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:17.038 00:22:42 accel -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:17.038 00:22:42 accel -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:17.038 00:22:42 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.038 00:22:42 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:17.038 00:22:42 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:17.038 00:22:42 accel -- accel/accel.sh@41 -- # jq -r . 00:07:17.038 [2024-05-15 00:22:43.040877] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:07:17.038 [2024-05-15 00:22:43.041011] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1810540 ] 00:07:17.038 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.038 [2024-05-15 00:22:43.171401] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.311 [2024-05-15 00:22:43.262435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.311 [2024-05-15 00:22:43.266938] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:17.311 [2024-05-15 00:22:43.274906] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:27.292 00:22:52 accel -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:07:27.292 00:22:52 accel -- common/autotest_common.sh@861 -- # return 0 00:07:27.292 00:22:52 accel -- accel/accel.sh@65 -- # [[ 1 -gt 0 ]] 00:07:27.292 00:22:52 accel -- accel/accel.sh@65 -- # check_save_config dsa_scan_accel_module 00:07:27.292 00:22:52 accel -- accel/accel.sh@56 -- # rpc_cmd save_config 00:07:27.292 00:22:52 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.292 00:22:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.292 00:22:52 accel -- accel/accel.sh@56 -- # grep dsa_scan_accel_module 00:07:27.292 00:22:52 accel -- accel/accel.sh@56 -- # jq -r '.subsystems[] | select(.subsystem=="accel").config[]' 00:07:27.292 00:22:52 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.292 "method": "dsa_scan_accel_module", 00:07:27.292 00:22:52 accel -- accel/accel.sh@66 -- # [[ 1 -gt 0 ]] 00:07:27.292 00:22:52 accel -- accel/accel.sh@66 -- # check_save_config iaa_scan_accel_module 00:07:27.292 00:22:52 accel -- accel/accel.sh@56 -- # rpc_cmd save_config 00:07:27.292 00:22:52 accel -- accel/accel.sh@56 -- # jq -r '.subsystems[] | select(.subsystem=="accel").config[]' 00:07:27.292 00:22:52 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.292 00:22:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.292 00:22:52 accel -- accel/accel.sh@56 -- # grep iaa_scan_accel_module 00:07:27.292 00:22:52 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.292 "method": "iaa_scan_accel_module" 00:07:27.292 
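The config grep above, and the opcode walk that follows, correspond to two plain RPC calls that can be reproduced by hand against the running target (a sketch; the rpc.py path assumes a stock SPDK tree):

    # confirm the dsa/iaa scan methods landed in the saved accel config
    ./scripts/rpc.py save_config \
        | jq -r '.subsystems[] | select(.subsystem=="accel").config[].method'
    # list which module each accel opcode (copy, fill, crc32c, ...) is routed to
    ./scripts/rpc.py accel_get_opc_assignments \
        | jq -r 'to_entries | map("\(.key)=\(.value)") | .[]'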
00:22:52 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:27.292 00:22:52 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:27.292 00:22:52 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:27.292 00:22:52 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:27.292 00:22:52 accel -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:07:27.292 00:22:52 accel -- common/autotest_common.sh@560 -- # xtrace_disable 00:07:27.292 00:22:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.292 00:22:52 accel -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:07:27.292 00:22:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.292 00:22:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.292 00:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.292 00:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:07:27.292 00:22:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.292 00:22:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.292 00:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.292 00:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:07:27.292 00:22:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.292 00:22:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.292 00:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.292 00:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:07:27.292 00:22:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.292 00:22:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.292 00:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.292 00:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:07:27.292 00:22:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.292 00:22:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.292 00:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.292 00:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:07:27.292 00:22:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.292 00:22:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.292 00:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.292 00:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:07:27.292 00:22:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.292 00:22:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.292 00:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.292 00:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=iaa 00:07:27.292 00:22:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.292 00:22:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.292 00:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.292 00:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=iaa 00:07:27.292 00:22:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.292 00:22:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.292 00:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.292 00:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:27.292 00:22:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.292 00:22:52 accel -- accel/accel.sh@72 -- # IFS== 
00:07:27.292 00:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.292 00:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:27.292 00:22:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.293 00:22:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.293 00:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.293 00:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:27.293 00:22:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.293 00:22:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.293 00:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.293 00:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:07:27.293 00:22:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.293 00:22:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.293 00:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.293 00:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:27.293 00:22:52 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:27.293 00:22:52 accel -- accel/accel.sh@72 -- # IFS== 00:07:27.293 00:22:52 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:27.293 00:22:52 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:07:27.293 00:22:52 accel -- accel/accel.sh@75 -- # killprocess 1810540 00:07:27.293 00:22:52 accel -- common/autotest_common.sh@947 -- # '[' -z 1810540 ']' 00:07:27.293 00:22:52 accel -- common/autotest_common.sh@951 -- # kill -0 1810540 00:07:27.293 00:22:52 accel -- common/autotest_common.sh@952 -- # uname 00:07:27.293 00:22:52 accel -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:07:27.293 00:22:52 accel -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1810540 00:07:27.293 00:22:52 accel -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:07:27.293 00:22:52 accel -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:07:27.293 00:22:52 accel -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1810540' 00:07:27.293 killing process with pid 1810540 00:07:27.293 00:22:52 accel -- common/autotest_common.sh@966 -- # kill 1810540 00:07:27.293 00:22:52 accel -- common/autotest_common.sh@971 -- # wait 1810540 00:07:29.831 00:22:55 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:29.831 00:22:55 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:29.831 00:22:55 accel -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:07:29.831 00:22:55 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:29.831 00:22:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.831 00:22:55 accel.accel_help -- common/autotest_common.sh@1122 -- # accel_perf -h 00:07:29.831 00:22:55 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:29.831 00:22:55 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:29.831 00:22:55 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.831 00:22:55 accel.accel_help -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:29.831 00:22:55 accel.accel_help -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:29.831 00:22:55 accel.accel_help -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:29.831 00:22:55 accel.accel_help -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": 
"iaa_scan_accel_module"}') 00:07:29.831 00:22:55 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.831 00:22:55 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.831 00:22:55 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:29.831 00:22:55 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:07:29.831 00:22:55 accel.accel_help -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:29.831 00:22:55 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:29.831 00:22:55 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:29.831 00:22:55 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:07:29.831 00:22:55 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:29.831 00:22:55 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.831 ************************************ 00:07:29.831 START TEST accel_missing_filename 00:07:29.831 ************************************ 00:07:29.831 00:22:55 accel.accel_missing_filename -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress 00:07:29.831 00:22:55 accel.accel_missing_filename -- common/autotest_common.sh@649 -- # local es=0 00:07:29.831 00:22:55 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:29.831 00:22:55 accel.accel_missing_filename -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:07:29.831 00:22:55 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:29.831 00:22:55 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # type -t accel_perf 00:07:29.831 00:22:55 accel.accel_missing_filename -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:29.831 00:22:55 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress 00:07:29.831 00:22:55 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:29.831 00:22:55 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:29.831 00:22:55 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.831 00:22:55 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:29.831 00:22:55 accel.accel_missing_filename -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:29.831 00:22:55 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:29.831 00:22:55 accel.accel_missing_filename -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:29.831 00:22:55 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.831 00:22:55 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.831 00:22:55 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:29.831 00:22:55 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:29.831 [2024-05-15 00:22:55.921074] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:07:29.831 [2024-05-15 00:22:55.921201] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1813269 ] 00:07:30.091 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.091 [2024-05-15 00:22:56.051213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.091 [2024-05-15 00:22:56.153273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.091 [2024-05-15 00:22:56.157838] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:30.091 [2024-05-15 00:22:56.165807] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:36.667 [2024-05-15 00:23:02.559399] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:38.577 [2024-05-15 00:23:04.424602] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:07:38.577 A filename is required. 00:07:38.577 00:23:04 accel.accel_missing_filename -- common/autotest_common.sh@652 -- # es=234 00:07:38.577 00:23:04 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:38.577 00:23:04 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # es=106 00:07:38.577 00:23:04 accel.accel_missing_filename -- common/autotest_common.sh@662 -- # case "$es" in 00:07:38.578 00:23:04 accel.accel_missing_filename -- common/autotest_common.sh@669 -- # es=1 00:07:38.578 00:23:04 accel.accel_missing_filename -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:38.578 00:07:38.578 real 0m8.712s 00:07:38.578 user 0m2.304s 00:07:38.578 sys 0m0.268s 00:07:38.578 00:23:04 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:38.578 00:23:04 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:38.578 ************************************ 00:07:38.578 END TEST accel_missing_filename 00:07:38.578 ************************************ 00:07:38.578 00:23:04 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:07:38.578 00:23:04 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:07:38.578 00:23:04 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:38.578 00:23:04 accel -- common/autotest_common.sh@10 -- # set +x 00:07:38.578 ************************************ 00:07:38.578 START TEST accel_compress_verify 00:07:38.578 ************************************ 00:07:38.578 00:23:04 accel.accel_compress_verify -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:07:38.578 00:23:04 accel.accel_compress_verify -- common/autotest_common.sh@649 -- # local es=0 00:07:38.578 00:23:04 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:07:38.578 00:23:04 accel.accel_compress_verify -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:07:38.578 00:23:04 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:38.578 00:23:04 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # type -t accel_perf 00:07:38.578 00:23:04 accel.accel_compress_verify -- common/autotest_common.sh@641 -- # case 
"$(type -t "$arg")" in 00:07:38.578 00:23:04 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:07:38.578 00:23:04 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:07:38.578 00:23:04 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:38.578 00:23:04 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.578 00:23:04 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:38.578 00:23:04 accel.accel_compress_verify -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:38.578 00:23:04 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:38.578 00:23:04 accel.accel_compress_verify -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:38.578 00:23:04 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.578 00:23:04 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.578 00:23:04 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:38.578 00:23:04 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:38.578 [2024-05-15 00:23:04.698134] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:07:38.578 [2024-05-15 00:23:04.698241] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1814835 ] 00:07:38.838 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.838 [2024-05-15 00:23:04.817928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.838 [2024-05-15 00:23:04.921356] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.838 [2024-05-15 00:23:04.925883] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:38.838 [2024-05-15 00:23:04.933860] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:45.419 [2024-05-15 00:23:11.320075] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:47.331 [2024-05-15 00:23:13.175018] accel_perf.c:1393:main: *ERROR*: ERROR starting application 00:07:47.331 00:07:47.331 Compression does not support the verify option, aborting. 
00:07:47.331 00:23:13 accel.accel_compress_verify -- common/autotest_common.sh@652 -- # es=161 00:07:47.331 00:23:13 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:47.331 00:23:13 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # es=33 00:07:47.331 00:23:13 accel.accel_compress_verify -- common/autotest_common.sh@662 -- # case "$es" in 00:07:47.331 00:23:13 accel.accel_compress_verify -- common/autotest_common.sh@669 -- # es=1 00:07:47.331 00:23:13 accel.accel_compress_verify -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:47.331 00:07:47.331 real 0m8.676s 00:07:47.331 user 0m2.277s 00:07:47.331 sys 0m0.260s 00:07:47.331 00:23:13 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:47.331 00:23:13 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:47.331 ************************************ 00:07:47.331 END TEST accel_compress_verify 00:07:47.331 ************************************ 00:07:47.331 00:23:13 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:47.331 00:23:13 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:07:47.331 00:23:13 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:47.331 00:23:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.331 ************************************ 00:07:47.331 START TEST accel_wrong_workload 00:07:47.331 ************************************ 00:07:47.331 00:23:13 accel.accel_wrong_workload -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w foobar 00:07:47.331 00:23:13 accel.accel_wrong_workload -- common/autotest_common.sh@649 -- # local es=0 00:07:47.331 00:23:13 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:47.331 00:23:13 accel.accel_wrong_workload -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:07:47.331 00:23:13 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:47.331 00:23:13 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # type -t accel_perf 00:07:47.331 00:23:13 accel.accel_wrong_workload -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:47.331 00:23:13 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w foobar 00:07:47.331 00:23:13 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:47.331 00:23:13 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:47.331 00:23:13 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.331 00:23:13 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:47.331 00:23:13 accel.accel_wrong_workload -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:47.331 00:23:13 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:47.331 00:23:13 accel.accel_wrong_workload -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:47.331 00:23:13 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.331 00:23:13 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.331 00:23:13 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:47.331 00:23:13 accel.accel_wrong_workload -- 
accel/accel.sh@41 -- # jq -r . 00:07:47.331 Unsupported workload type: foobar 00:07:47.331 [2024-05-15 00:23:13.432581] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:47.331 accel_perf options: 00:07:47.331 [-h help message] 00:07:47.331 [-q queue depth per core] 00:07:47.331 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:47.331 [-T number of threads per core 00:07:47.331 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:47.331 [-t time in seconds] 00:07:47.331 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:47.331 [ dif_verify, , dif_generate, dif_generate_copy 00:07:47.331 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:47.331 [-l for compress/decompress workloads, name of uncompressed input file 00:07:47.331 [-S for crc32c workload, use this seed value (default 0) 00:07:47.331 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:47.331 [-f for fill workload, use this BYTE value (default 255) 00:07:47.331 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:47.331 [-y verify result if this switch is on] 00:07:47.331 [-a tasks to allocate per core (default: same value as -q)] 00:07:47.331 Can be used to spread operations across a wider range of memory. 00:07:47.331 00:23:13 accel.accel_wrong_workload -- common/autotest_common.sh@652 -- # es=1 00:07:47.331 00:23:13 accel.accel_wrong_workload -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:47.331 00:23:13 accel.accel_wrong_workload -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:47.331 00:23:13 accel.accel_wrong_workload -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:47.331 00:07:47.331 real 0m0.057s 00:07:47.331 user 0m0.063s 00:07:47.331 sys 0m0.026s 00:07:47.331 00:23:13 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:47.331 00:23:13 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:47.331 ************************************ 00:07:47.331 END TEST accel_wrong_workload 00:07:47.331 ************************************ 00:07:47.331 00:23:13 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:47.331 00:23:13 accel -- common/autotest_common.sh@1098 -- # '[' 10 -le 1 ']' 00:07:47.331 00:23:13 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:47.331 00:23:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.591 ************************************ 00:07:47.591 START TEST accel_negative_buffers 00:07:47.591 ************************************ 00:07:47.591 00:23:13 accel.accel_negative_buffers -- common/autotest_common.sh@1122 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:47.591 00:23:13 accel.accel_negative_buffers -- common/autotest_common.sh@649 -- # local es=0 00:07:47.591 00:23:13 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:47.591 00:23:13 accel.accel_negative_buffers -- common/autotest_common.sh@637 -- # local arg=accel_perf 00:07:47.591 00:23:13 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:47.591 00:23:13 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # type -t 
accel_perf 00:07:47.591 00:23:13 accel.accel_negative_buffers -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:07:47.591 00:23:13 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # accel_perf -t 1 -w xor -y -x -1 00:07:47.591 00:23:13 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:47.591 00:23:13 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:47.591 00:23:13 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.591 00:23:13 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:47.591 00:23:13 accel.accel_negative_buffers -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:47.591 00:23:13 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:47.591 00:23:13 accel.accel_negative_buffers -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:47.591 00:23:13 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.591 00:23:13 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.591 00:23:13 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:47.591 00:23:13 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:47.591 -x option must be non-negative. 00:07:47.591 [2024-05-15 00:23:13.544117] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:47.591 accel_perf options: 00:07:47.591 [-h help message] 00:07:47.591 [-q queue depth per core] 00:07:47.591 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:47.591 [-T number of threads per core 00:07:47.591 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:47.591 [-t time in seconds] 00:07:47.591 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:47.591 [ dif_verify, , dif_generate, dif_generate_copy 00:07:47.591 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:47.591 [-l for compress/decompress workloads, name of uncompressed input file 00:07:47.591 [-S for crc32c workload, use this seed value (default 0) 00:07:47.592 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:47.592 [-f for fill workload, use this BYTE value (default 255) 00:07:47.592 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:47.592 [-y verify result if this switch is on] 00:07:47.592 [-a tasks to allocate per core (default: same value as -q)] 00:07:47.592 Can be used to spread operations across a wider range of memory. 
00:07:47.592 00:23:13 accel.accel_negative_buffers -- common/autotest_common.sh@652 -- # es=1 00:07:47.592 00:23:13 accel.accel_negative_buffers -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:07:47.592 00:23:13 accel.accel_negative_buffers -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:07:47.592 00:23:13 accel.accel_negative_buffers -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:07:47.592 00:07:47.592 real 0m0.052s 00:07:47.592 user 0m0.054s 00:07:47.592 sys 0m0.029s 00:07:47.592 00:23:13 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:47.592 00:23:13 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:47.592 ************************************ 00:07:47.592 END TEST accel_negative_buffers 00:07:47.592 ************************************ 00:07:47.592 00:23:13 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:47.592 00:23:13 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:07:47.592 00:23:13 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:47.592 00:23:13 accel -- common/autotest_common.sh@10 -- # set +x 00:07:47.592 ************************************ 00:07:47.592 START TEST accel_crc32c 00:07:47.592 ************************************ 00:07:47.592 00:23:13 accel.accel_crc32c -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:47.592 00:23:13 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:47.592 00:23:13 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:47.592 00:23:13 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:47.592 00:23:13 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:47.592 00:23:13 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:47.592 00:23:13 accel.accel_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:47.592 00:23:13 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:47.592 00:23:13 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:47.592 00:23:13 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:47.592 00:23:13 accel.accel_crc32c -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:47.592 00:23:13 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:47.592 00:23:13 accel.accel_crc32c -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:47.592 00:23:13 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.592 00:23:13 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:47.592 00:23:13 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:47.592 00:23:13 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:47.592 [2024-05-15 00:23:13.652150] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:07:47.592 [2024-05-15 00:23:13.652251] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1816646 ] 00:07:47.592 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.852 [2024-05-15 00:23:13.768326] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.852 [2024-05-15 00:23:13.873628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.852 [2024-05-15 00:23:13.878137] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:47.852 [2024-05-15 00:23:13.886116] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:54.531 00:23:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:54.531 00:23:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.531 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.531 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.531 00:23:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:54.531 00:23:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.531 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.531 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.531 00:23:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:54.531 00:23:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.531 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.531 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.531 00:23:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:54.531 00:23:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.531 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.531 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.531 00:23:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:54.531 00:23:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.531 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.531 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.531 00:23:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:54.531 00:23:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.531 00:23:20 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:54.531 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.531 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.531 00:23:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.532 00:23:20 
accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=dsa 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=dsa 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:54.532 00:23:20 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.828 00:23:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:57.828 00:23:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.828 00:23:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.828 00:23:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.828 00:23:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:57.828 00:23:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.828 00:23:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.828 00:23:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.828 00:23:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:57.828 00:23:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.828 00:23:23 accel.accel_crc32c -- 
accel/accel.sh@19 -- # IFS=: 00:07:57.828 00:23:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.828 00:23:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:57.828 00:23:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.828 00:23:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.828 00:23:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.828 00:23:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:57.828 00:23:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.828 00:23:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.828 00:23:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.828 00:23:23 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:57.828 00:23:23 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:57.828 00:23:23 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:57.828 00:23:23 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:57.828 00:23:23 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:07:57.828 00:23:23 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:57.828 00:23:23 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:07:57.828 00:07:57.828 real 0m9.680s 00:07:57.828 user 0m3.278s 00:07:57.828 sys 0m0.236s 00:07:57.828 00:23:23 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable 00:07:57.828 00:23:23 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:57.828 ************************************ 00:07:57.828 END TEST accel_crc32c 00:07:57.828 ************************************ 00:07:57.828 00:23:23 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:57.828 00:23:23 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:07:57.828 00:23:23 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:07:57.828 00:23:23 accel -- common/autotest_common.sh@10 -- # set +x 00:07:57.828 ************************************ 00:07:57.828 START TEST accel_crc32c_C2 00:07:57.828 ************************************ 00:07:57.828 00:23:23 accel.accel_crc32c_C2 -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:57.828 00:23:23 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:57.828 00:23:23 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:57.828 00:23:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:57.828 00:23:23 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:57.828 00:23:23 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:57.828 00:23:23 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:57.828 00:23:23 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:57.828 00:23:23 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:57.828 00:23:23 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:57.828 00:23:23 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:57.828 00:23:23 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:57.828 00:23:23 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:57.828 00:23:23 
accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.828 00:23:23 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:57.828 00:23:23 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:57.828 00:23:23 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:57.828 [2024-05-15 00:23:23.380310] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:07:57.828 [2024-05-15 00:23:23.380391] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1818588 ] 00:07:57.828 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.828 [2024-05-15 00:23:23.478553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.828 [2024-05-15 00:23:23.589097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.828 [2024-05-15 00:23:23.593633] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:57.828 [2024-05-15 00:23:23.601612] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read 
-r var val 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=dsa 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=dsa 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:04.408 00:23:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:06.950 00:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:06.950 00:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.950 00:23:33 
accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:06.950 00:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:06.950 00:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:06.950 00:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.950 00:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:06.950 00:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:06.950 00:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:06.950 00:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.950 00:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:06.950 00:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:06.950 00:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:06.950 00:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.950 00:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:06.950 00:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:06.950 00:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:06.950 00:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.950 00:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:06.950 00:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:06.950 00:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:06.950 00:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:06.950 00:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:06.950 00:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:06.950 00:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:08:06.950 00:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:08:06.950 00:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:08:06.950 00:08:06.950 real 0m9.676s 00:08:06.950 user 0m3.300s 00:08:06.950 sys 0m0.211s 00:08:06.950 00:23:33 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:08:06.950 00:23:33 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:06.950 ************************************ 00:08:06.950 END TEST accel_crc32c_C2 00:08:06.950 ************************************ 00:08:06.950 00:23:33 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:08:06.950 00:23:33 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:08:06.950 00:23:33 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:08:06.950 00:23:33 accel -- common/autotest_common.sh@10 -- # set +x 00:08:06.950 ************************************ 00:08:06.950 START TEST accel_copy 00:08:06.950 ************************************ 00:08:06.950 00:23:33 accel.accel_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy -y 00:08:06.950 00:23:33 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:08:06.950 00:23:33 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:08:06.950 00:23:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:06.950 00:23:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:06.950 00:23:33 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:08:06.950 00:23:33 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c 
/dev/fd/62 -t 1 -w copy -y 00:08:06.950 00:23:33 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:08:06.950 00:23:33 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:06.950 00:23:33 accel.accel_copy -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:06.950 00:23:33 accel.accel_copy -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:06.950 00:23:33 accel.accel_copy -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:06.950 00:23:33 accel.accel_copy -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:06.950 00:23:33 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.950 00:23:33 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:06.951 00:23:33 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:08:06.951 00:23:33 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:08:07.210 [2024-05-15 00:23:33.132216] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:08:07.210 [2024-05-15 00:23:33.132348] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1820553 ] 00:08:07.210 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.210 [2024-05-15 00:23:33.246285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.210 [2024-05-15 00:23:33.350162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.210 [2024-05-15 00:23:33.354665] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:07.210 [2024-05-15 00:23:33.362640] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:13.807 00:23:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:13.807 00:23:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.807 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.807 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.807 00:23:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:13.807 00:23:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.807 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.807 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.807 00:23:39 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:08:13.807 00:23:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.807 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.807 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.807 00:23:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:13.807 00:23:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.807 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.807 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@23 -- # 
accel_opc=copy 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@20 -- # val=dsa 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=dsa 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:13.808 00:23:39 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:17.109 00:23:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:17.109 00:23:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:17.109 00:23:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:17.109 00:23:42 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:17.109 00:23:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:17.109 00:23:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:17.109 00:23:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:17.109 00:23:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:17.109 00:23:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:17.109 00:23:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:17.109 00:23:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:17.109 00:23:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:17.109 00:23:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:17.109 00:23:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:17.109 00:23:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:17.109 00:23:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:17.109 00:23:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:17.109 00:23:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:17.109 00:23:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:17.109 00:23:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:17.109 00:23:42 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:08:17.109 00:23:42 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:08:17.109 00:23:42 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:08:17.109 00:23:42 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:08:17.109 00:23:42 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:08:17.109 00:23:42 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:08:17.109 00:23:42 accel.accel_copy -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:08:17.109 00:08:17.109 real 0m9.689s 00:08:17.109 user 0m3.285s 00:08:17.109 sys 0m0.243s 00:08:17.109 00:23:42 accel.accel_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:08:17.109 00:23:42 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:08:17.109 ************************************ 00:08:17.109 END TEST accel_copy 00:08:17.109 ************************************ 00:08:17.109 00:23:42 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:17.109 00:23:42 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:08:17.109 00:23:42 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:08:17.109 00:23:42 accel -- common/autotest_common.sh@10 -- # set +x 00:08:17.109 ************************************ 00:08:17.109 START TEST accel_fill 00:08:17.109 ************************************ 00:08:17.109 00:23:42 accel.accel_fill -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:17.109 00:23:42 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:08:17.109 00:23:42 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:08:17.109 00:23:42 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:17.109 00:23:42 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:17.109 00:23:42 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:17.109 00:23:42 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:08:17.109 00:23:42 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:08:17.109 00:23:42 
accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:17.109 00:23:42 accel.accel_fill -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:17.109 00:23:42 accel.accel_fill -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:17.109 00:23:42 accel.accel_fill -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:17.109 00:23:42 accel.accel_fill -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:17.109 00:23:42 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:17.109 00:23:42 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:17.109 00:23:42 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:08:17.109 00:23:42 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:08:17.109 [2024-05-15 00:23:42.888765] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:08:17.109 [2024-05-15 00:23:42.888897] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1822367 ] 00:08:17.109 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.109 [2024-05-15 00:23:43.017689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.109 [2024-05-15 00:23:43.116869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.109 [2024-05-15 00:23:43.121410] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:17.109 [2024-05-15 00:23:43.129376] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.690 00:23:49 accel.accel_fill -- 
accel/accel.sh@19 -- # read -r var val 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@20 -- # val=dsa 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=dsa 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.690 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:23.691 00:23:49 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:23.691 00:23:49 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:23.691 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:23.691 00:23:49 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:26.988 00:23:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 
00:08:26.988 00:23:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:26.988 00:23:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:26.988 00:23:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:26.988 00:23:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:26.988 00:23:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:26.988 00:23:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:26.988 00:23:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:26.988 00:23:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:26.988 00:23:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:26.988 00:23:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:26.988 00:23:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:26.988 00:23:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:26.988 00:23:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:26.988 00:23:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:26.988 00:23:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:26.988 00:23:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:26.988 00:23:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:26.988 00:23:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:26.988 00:23:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:26.988 00:23:52 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:08:26.988 00:23:52 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:08:26.988 00:23:52 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:08:26.988 00:23:52 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:08:26.988 00:23:52 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:08:26.988 00:23:52 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:08:26.988 00:23:52 accel.accel_fill -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:08:26.988 00:08:26.988 real 0m9.717s 00:08:26.988 user 0m3.283s 00:08:26.988 sys 0m0.270s 00:08:26.988 00:23:52 accel.accel_fill -- common/autotest_common.sh@1123 -- # xtrace_disable 00:08:26.988 00:23:52 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:08:26.988 ************************************ 00:08:26.988 END TEST accel_fill 00:08:26.988 ************************************ 00:08:26.988 00:23:52 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:08:26.988 00:23:52 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:08:26.988 00:23:52 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:08:26.988 00:23:52 accel -- common/autotest_common.sh@10 -- # set +x 00:08:26.988 ************************************ 00:08:26.988 START TEST accel_copy_crc32c 00:08:26.988 ************************************ 00:08:26.988 00:23:52 accel.accel_copy_crc32c -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy_crc32c -y 00:08:26.988 00:23:52 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:08:26.988 00:23:52 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:08:26.988 00:23:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:26.988 00:23:52 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:26.988 00:23:52 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:08:26.988 00:23:52 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:08:26.988 00:23:52 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:08:26.988 00:23:52 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:26.988 00:23:52 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:26.988 00:23:52 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:26.988 00:23:52 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:26.988 00:23:52 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:26.988 00:23:52 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:26.988 00:23:52 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:26.988 00:23:52 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:08:26.988 00:23:52 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:08:26.988 [2024-05-15 00:23:52.652417] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:08:26.988 [2024-05-15 00:23:52.652523] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1824374 ] 00:08:26.988 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.988 [2024-05-15 00:23:52.768801] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.988 [2024-05-15 00:23:52.866978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.988 [2024-05-15 00:23:52.871470] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:26.988 [2024-05-15 00:23:52.879450] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:33.564 00:23:59 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=dsa 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=dsa 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:33.564 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:33.565 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:08:33.565 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:33.565 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:33.565 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:33.565 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:08:33.565 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:33.565 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:33.565 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:33.565 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:08:33.565 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:33.565 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:33.565 00:23:59 
accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:33.565 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:08:33.565 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:33.565 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:33.565 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:33.565 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:33.565 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:33.565 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:33.565 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:33.565 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:33.565 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:33.565 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:33.565 00:23:59 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:08:36.862 00:08:36.862 real 0m9.672s 00:08:36.862 user 0m3.279s 00:08:36.862 sys 0m0.228s 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # xtrace_disable 00:08:36.862 00:24:02 accel.accel_copy_crc32c -- 
common/autotest_common.sh@10 -- # set +x 00:08:36.862 ************************************ 00:08:36.862 END TEST accel_copy_crc32c 00:08:36.862 ************************************ 00:08:36.862 00:24:02 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:08:36.862 00:24:02 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:08:36.862 00:24:02 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:08:36.862 00:24:02 accel -- common/autotest_common.sh@10 -- # set +x 00:08:36.862 ************************************ 00:08:36.862 START TEST accel_copy_crc32c_C2 00:08:36.862 ************************************ 00:08:36.862 00:24:02 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:08:36.862 00:24:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:08:36.862 00:24:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:08:36.862 00:24:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:36.862 00:24:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:36.862 00:24:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:08:36.862 00:24:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:08:36.862 00:24:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:08:36.862 00:24:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:36.862 00:24:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:36.862 00:24:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:36.862 00:24:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:36.862 00:24:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:36.862 00:24:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:36.862 00:24:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:36.862 00:24:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:08:36.863 00:24:02 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:08:36.863 [2024-05-15 00:24:02.386107] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
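The trace just above shows the accel_copy_crc32c_C2 case launching accel_perf with -t 1 -w copy_crc32c -y -C 2 and an accel configuration streamed over /dev/fd/62; the only config fragments visible in the trace are {"method": "dsa_scan_accel_module"} and {"method": "iaa_scan_accel_module"}. Below is a rough, hedged sketch of rerunning that same workload by hand against this workspace's build. The accel.json file name and the surrounding "subsystems"/"accel" wrapper are assumptions for illustration only; the excerpt never shows the full JSON the test script assembles, only the two method fragments and the flags.

# Sketch only: manual approximation of the traced accel_copy_crc32c_C2 invocation.
# The two "method" entries are copied from the trace; the wrapper object and the
# accel.json file name are assumed, since the test feeds its config via /dev/fd/62.
cat > accel.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "accel",
      "config": [
        { "method": "dsa_scan_accel_module" },
        { "method": "iaa_scan_accel_module" }
      ]
    }
  ]
}
EOF
/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf \
    -c accel.json -t 1 -w copy_crc32c -y -C 2   # same flags as the traced command

Writing the config to a file instead of /dev/fd/62 only changes how it is delivered to accel_perf; the workload flags match the traced command exactly.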
00:08:36.863 [2024-05-15 00:24:02.386242] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1826399 ] 00:08:36.863 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.863 [2024-05-15 00:24:02.517162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.863 [2024-05-15 00:24:02.617616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.863 [2024-05-15 00:24:02.622144] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:36.863 [2024-05-15 00:24:02.630113] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:43.557 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # 
IFS=: 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=dsa 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=dsa 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 
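Each accel case in this log closes with a real/user/sys timing block just before its END TEST banner (real 0m9.680s for accel_crc32c, 0m9.676s for accel_crc32c_C2, 0m9.689s for accel_copy, 0m9.717s for accel_fill, and 0m9.672s for accel_copy_crc32c so far). A purely illustrative one-liner for pulling those wall-clock figures out of a saved copy of this console output and pairing them with the test names follows; the build.log file name is an assumption, not something produced by this pipeline.

# Sketch: print each reported real time next to the END TEST banner that follows it.
grep -oE 'real[[:space:]]+[0-9]+m[0-9.]+s|END TEST [A-Za-z0-9_]+' build.log | paste - -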
00:08:43.558 00:24:09 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:08:46.100 00:08:46.100 real 0m9.704s 00:08:46.100 user 0m3.286s 00:08:46.100 sys 0m0.254s 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:08:46.100 00:24:12 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:08:46.100 ************************************ 00:08:46.100 END TEST accel_copy_crc32c_C2 00:08:46.100 ************************************ 00:08:46.100 00:24:12 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:08:46.100 00:24:12 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:08:46.100 00:24:12 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:08:46.100 00:24:12 accel -- common/autotest_common.sh@10 -- # set +x 00:08:46.100 ************************************ 00:08:46.100 START TEST accel_dualcast 00:08:46.100 ************************************ 00:08:46.100 00:24:12 accel.accel_dualcast -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dualcast -y 00:08:46.100 00:24:12 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:08:46.100 
00:24:12 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:08:46.100 00:24:12 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:46.100 00:24:12 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:46.100 00:24:12 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:08:46.100 00:24:12 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:08:46.100 00:24:12 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:08:46.100 00:24:12 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:46.100 00:24:12 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:46.100 00:24:12 accel.accel_dualcast -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:46.100 00:24:12 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:46.100 00:24:12 accel.accel_dualcast -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:46.100 00:24:12 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:46.100 00:24:12 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:46.100 00:24:12 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:08:46.100 00:24:12 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:08:46.100 [2024-05-15 00:24:12.139862] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:08:46.100 [2024-05-15 00:24:12.139973] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1828638 ] 00:08:46.100 EAL: No free 2048 kB hugepages reported on node 1 00:08:46.100 [2024-05-15 00:24:12.254954] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.360 [2024-05-15 00:24:12.356574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.360 [2024-05-15 00:24:12.361052] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:46.360 [2024-05-15 00:24:12.369034] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:52.953 00:24:18 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dsa 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=dsa 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:52.953 00:24:18 
accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:52.953 00:24:18 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:56.244 00:24:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:56.244 00:24:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:56.244 00:24:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:56.244 00:24:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:56.244 00:24:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:56.244 00:24:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:56.244 00:24:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:56.244 00:24:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:56.244 00:24:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:56.244 00:24:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:56.244 00:24:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:56.244 00:24:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:56.244 00:24:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:56.244 00:24:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:56.244 00:24:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:56.244 00:24:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:56.244 00:24:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:56.244 00:24:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:56.244 00:24:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:56.244 00:24:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:56.244 00:24:21 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:08:56.244 00:24:21 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:08:56.244 00:24:21 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:08:56.244 00:24:21 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:08:56.244 00:24:21 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:08:56.244 00:24:21 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:08:56.244 00:24:21 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:08:56.244 00:08:56.244 real 0m9.672s 00:08:56.244 user 0m3.262s 00:08:56.244 sys 0m0.244s 00:08:56.244 00:24:21 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # xtrace_disable 00:08:56.244 00:24:21 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:08:56.244 ************************************ 00:08:56.244 END TEST accel_dualcast 00:08:56.244 ************************************ 00:08:56.244 00:24:21 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:08:56.244 00:24:21 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:08:56.244 00:24:21 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:08:56.244 00:24:21 accel -- common/autotest_common.sh@10 -- # set +x 00:08:56.244 ************************************ 00:08:56.244 START TEST accel_compare 
00:08:56.244 ************************************ 00:08:56.244 00:24:21 accel.accel_compare -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w compare -y 00:08:56.244 00:24:21 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:08:56.244 00:24:21 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:08:56.244 00:24:21 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:08:56.244 00:24:21 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:08:56.244 00:24:21 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:08:56.244 00:24:21 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:08:56.244 00:24:21 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:08:56.244 00:24:21 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:56.244 00:24:21 accel.accel_compare -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:56.244 00:24:21 accel.accel_compare -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:56.244 00:24:21 accel.accel_compare -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:56.244 00:24:21 accel.accel_compare -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:56.244 00:24:21 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:56.244 00:24:21 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:56.244 00:24:21 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:08:56.244 00:24:21 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:08:56.244 [2024-05-15 00:24:21.866626] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
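The accel_dualcast run above completed on the dsa module (real 0m9.672s, user 0m3.262s, sys 0m0.244s), and the compare test is being brought up with the same accel_perf -c /dev/fd/62 -t 1 -w compare -y pattern. A minimal sketch of what dualcast does, under the assumption that the 4096-byte value in the trace is the per-operation buffer size (illustrative only, not SPDK code):

```python
# Illustrative model of the dualcast workload: one source buffer is written
# into two destination buffers, and the verified (-y) run checks both copies.
def dualcast(src: bytes):
    dst1 = bytearray(src)   # first destination
    dst2 = bytearray(src)   # second destination
    return dst1, dst2

src = bytes(range(256)) * 16                 # 4096 bytes, as in the trace
dst1, dst2 = dualcast(src)
assert bytes(dst1) == src and bytes(dst2) == src
```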
00:08:56.244 [2024-05-15 00:24:21.866733] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1830451 ] 00:08:56.244 EAL: No free 2048 kB hugepages reported on node 1 00:08:56.244 [2024-05-15 00:24:21.984558] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.244 [2024-05-15 00:24:22.094897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.244 [2024-05-15 00:24:22.099507] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:56.244 [2024-05-15 00:24:22.107484] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@20 -- # val=dsa 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@21 -- 
# case "$var" in 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=dsa 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:02.825 00:24:28 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:06.110 00:24:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:06.110 00:24:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:06.110 00:24:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:06.110 00:24:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:06.110 00:24:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:06.110 00:24:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:06.110 00:24:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:06.110 00:24:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:06.110 00:24:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:06.110 00:24:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:06.110 00:24:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:06.110 00:24:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:06.110 00:24:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:06.110 00:24:31 accel.accel_compare -- accel/accel.sh@21 
-- # case "$var" in 00:09:06.110 00:24:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:06.110 00:24:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:06.110 00:24:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:06.110 00:24:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:06.110 00:24:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:06.110 00:24:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:06.110 00:24:31 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:09:06.110 00:24:31 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:09:06.110 00:24:31 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:09:06.110 00:24:31 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:09:06.110 00:24:31 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:09:06.110 00:24:31 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:09:06.110 00:24:31 accel.accel_compare -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:09:06.110 00:09:06.110 real 0m9.714s 00:09:06.110 user 0m3.307s 00:09:06.110 sys 0m0.240s 00:09:06.110 00:24:31 accel.accel_compare -- common/autotest_common.sh@1123 -- # xtrace_disable 00:09:06.110 00:24:31 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:09:06.110 ************************************ 00:09:06.110 END TEST accel_compare 00:09:06.110 ************************************ 00:09:06.110 00:24:31 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:09:06.110 00:24:31 accel -- common/autotest_common.sh@1098 -- # '[' 7 -le 1 ']' 00:09:06.110 00:24:31 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:09:06.110 00:24:31 accel -- common/autotest_common.sh@10 -- # set +x 00:09:06.110 ************************************ 00:09:06.110 START TEST accel_xor 00:09:06.110 ************************************ 00:09:06.110 00:24:31 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y 00:09:06.110 00:24:31 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:09:06.110 00:24:31 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:09:06.110 00:24:31 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:06.110 00:24:31 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:06.110 00:24:31 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:09:06.110 00:24:31 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:09:06.110 00:24:31 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:09:06.110 00:24:31 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:06.110 00:24:31 accel.accel_xor -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:09:06.110 00:24:31 accel.accel_xor -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:06.110 00:24:31 accel.accel_xor -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:06.110 00:24:31 accel.accel_xor -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:06.110 00:24:31 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:06.110 00:24:31 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:06.110 00:24:31 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:09:06.110 00:24:31 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 
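The accel_compare run above also stayed on the dsa module (real 0m9.714s), and the next block configures accel_perf -t 1 -w compare's sibling, the xor workload (-t 1 -w xor -y). Compare itself is essentially a hardware memcmp; a hedged sketch of the semantics the verified run exercises (not SPDK code):

```python
# Illustrative model of the compare workload: two equal-length buffers are
# compared and the operation succeeds only if every byte matches.
def compare(buf_a: bytes, buf_b: bytes) -> int:
    if len(buf_a) != len(buf_b):
        raise ValueError("compare expects equal-length buffers")
    return 0 if buf_a == buf_b else 1   # 0 == match, non-zero == miscompare

a = bytes(4096)                          # 4096-byte buffers, as in the trace
assert compare(a, bytes(4096)) == 0
assert compare(a, b"\x01" + bytes(4095)) == 1
```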
00:09:06.110 [2024-05-15 00:24:31.622725] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:09:06.110 [2024-05-15 00:24:31.622797] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1832537 ] 00:09:06.110 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.110 [2024-05-15 00:24:31.709988] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.110 [2024-05-15 00:24:31.809603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.110 [2024-05-15 00:24:31.814089] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:06.110 [2024-05-15 00:24:31.822069] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:12.680 
00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:12.680 00:24:38 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:15.219 00:24:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:15.219 00:24:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:15.219 00:24:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:15.219 00:24:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:15.219 00:24:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:15.219 00:24:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:15.219 00:24:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:15.219 00:24:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:15.219 00:24:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:15.219 00:24:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:15.219 00:24:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:15.219 00:24:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:15.219 00:24:41 
accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:15.219 00:24:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:15.219 00:24:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:15.219 00:24:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:15.219 00:24:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:15.219 00:24:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:15.219 00:24:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:15.219 00:24:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:15.219 00:24:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:15.219 00:24:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:15.219 00:24:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:15.219 00:24:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:15.219 00:24:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:15.219 00:24:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:09:15.219 00:24:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:15.219 00:09:15.219 real 0m9.626s 00:09:15.219 user 0m3.254s 00:09:15.219 sys 0m0.210s 00:09:15.219 00:24:41 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:09:15.219 00:24:41 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:09:15.219 ************************************ 00:09:15.219 END TEST accel_xor 00:09:15.219 ************************************ 00:09:15.219 00:24:41 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:09:15.219 00:24:41 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:09:15.219 00:24:41 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:09:15.220 00:24:41 accel -- common/autotest_common.sh@10 -- # set +x 00:09:15.220 ************************************ 00:09:15.220 START TEST accel_xor 00:09:15.220 ************************************ 00:09:15.220 00:24:41 accel.accel_xor -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w xor -y -x 3 00:09:15.220 00:24:41 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:09:15.220 00:24:41 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:09:15.220 00:24:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:15.220 00:24:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:15.220 00:24:41 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:09:15.220 00:24:41 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:09:15.220 00:24:41 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:09:15.220 00:24:41 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:15.220 00:24:41 accel.accel_xor -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:09:15.220 00:24:41 accel.accel_xor -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:15.220 00:24:41 accel.accel_xor -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:15.220 00:24:41 accel.accel_xor -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:15.220 00:24:41 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:15.220 00:24:41 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:15.220 00:24:41 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:09:15.220 00:24:41 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 
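Unlike the earlier workloads, the xor run that just finished selected the software fallback rather than dsa (the trace shows accel_module=software and the 0m9.626s summary is therefore a CPU-path measurement), and the second xor test now being configured adds -x 3 for three source buffers instead of the two used above. A minimal model of the n-source xor that the verified runs check (illustrative only, not SPDK code):

```python
from functools import reduce

def xor_buffers(*sources: bytes) -> bytes:
    # XOR an arbitrary number of equal-length source buffers into one output;
    # the -x <n> option varies the source count (2 above, 3 in the next run).
    assert len({len(s) for s in sources}) == 1, "sources must be equal length"
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*sources))

a = bytes([0xAA] * 4096)
b = bytes([0x55] * 4096)
c = bytes([0x0F] * 4096)
assert xor_buffers(a, b) == bytes([0xFF] * 4096)      # two-source case
assert xor_buffers(a, b, c) == bytes([0xF0] * 4096)   # three-source case (-x 3)
```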
00:09:15.220 [2024-05-15 00:24:41.313870] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:09:15.220 [2024-05-15 00:24:41.313973] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1834354 ] 00:09:15.479 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.479 [2024-05-15 00:24:41.427315] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.479 [2024-05-15 00:24:41.529774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.480 [2024-05-15 00:24:41.534299] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:15.480 [2024-05-15 00:24:41.542250] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:22.066 
00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:22.066 00:24:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:25.399 00:24:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:25.399 00:24:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:25.399 00:24:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:25.399 00:24:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:25.399 00:24:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:25.399 00:24:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:25.399 00:24:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:25.399 00:24:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:25.399 00:24:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:25.399 00:24:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:25.399 00:24:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:25.399 00:24:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:25.399 00:24:50 
accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:25.399 00:24:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:25.399 00:24:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:25.399 00:24:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:25.399 00:24:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:25.399 00:24:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:25.399 00:24:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:25.399 00:24:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:25.399 00:24:50 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:09:25.399 00:24:50 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:09:25.399 00:24:50 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:09:25.399 00:24:50 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:09:25.399 00:24:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:25.399 00:24:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:09:25.399 00:24:50 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:25.399 00:09:25.399 real 0m9.677s 00:09:25.399 user 0m3.273s 00:09:25.399 sys 0m0.237s 00:09:25.399 00:24:50 accel.accel_xor -- common/autotest_common.sh@1123 -- # xtrace_disable 00:09:25.399 00:24:50 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:09:25.399 ************************************ 00:09:25.399 END TEST accel_xor 00:09:25.399 ************************************ 00:09:25.399 00:24:50 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:09:25.399 00:24:50 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:09:25.399 00:24:50 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:09:25.399 00:24:50 accel -- common/autotest_common.sh@10 -- # set +x 00:09:25.399 ************************************ 00:09:25.399 START TEST accel_dif_verify 00:09:25.399 ************************************ 00:09:25.399 00:24:51 accel.accel_dif_verify -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_verify 00:09:25.399 00:24:51 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:09:25.399 00:24:51 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:09:25.399 00:24:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:25.399 00:24:51 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:25.399 00:24:51 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:09:25.399 00:24:51 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:09:25.399 00:24:51 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:09:25.399 00:24:51 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:25.399 00:24:51 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:09:25.399 00:24:51 accel.accel_dif_verify -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:25.399 00:24:51 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:25.399 00:24:51 accel.accel_dif_verify -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:25.399 00:24:51 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:25.399 00:24:51 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:25.399 00:24:51 
accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:09:25.399 00:24:51 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:09:25.399 [2024-05-15 00:24:51.044086] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:09:25.399 [2024-05-15 00:24:51.044187] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1836178 ] 00:09:25.399 EAL: No free 2048 kB hugepages reported on node 1 00:09:25.399 [2024-05-15 00:24:51.159741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.399 [2024-05-15 00:24:51.259243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.399 [2024-05-15 00:24:51.263749] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:25.399 [2024-05-15 00:24:51.271730] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:32.033 
00:24:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dsa 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=dsa 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:32.033 
00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:32.033 00:24:57 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:32.034 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:32.034 00:24:57 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:34.571 00:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:34.571 00:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:34.571 00:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:34.571 00:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:34.571 00:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:34.571 00:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:34.571 00:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:34.571 00:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:34.571 00:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:34.571 00:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:34.571 00:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:34.571 00:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:34.571 00:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:34.571 00:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:34.571 00:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:34.572 00:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:34.572 00:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:34.572 00:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:34.572 00:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:34.572 00:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:34.572 00:25:00 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:09:34.572 00:25:00 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:09:34.572 00:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:09:34.572 00:25:00 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:09:34.572 00:25:00 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:09:34.572 00:25:00 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:09:34.572 00:25:00 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:09:34.572 00:09:34.572 real 0m9.694s 00:09:34.572 user 0m3.300s 00:09:34.572 sys 0m0.236s 00:09:34.572 00:25:00 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:09:34.572 00:25:00 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:09:34.572 ************************************ 00:09:34.572 END TEST accel_dif_verify 00:09:34.572 ************************************ 00:09:34.572 00:25:00 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:09:34.572 00:25:00 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:09:34.572 00:25:00 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:09:34.572 00:25:00 accel -- common/autotest_common.sh@10 -- # set +x 00:09:34.830 ************************************ 00:09:34.830 START TEST accel_dif_generate 00:09:34.830 ************************************ 00:09:34.830 
00:25:00 accel.accel_dif_generate -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate 00:09:34.830 00:25:00 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:09:34.830 00:25:00 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:09:34.830 00:25:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:34.830 00:25:00 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:34.830 00:25:00 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:09:34.831 00:25:00 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:09:34.831 00:25:00 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:09:34.831 00:25:00 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:34.831 00:25:00 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:09:34.831 00:25:00 accel.accel_dif_generate -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:34.831 00:25:00 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:34.831 00:25:00 accel.accel_dif_generate -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:34.831 00:25:00 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:34.831 00:25:00 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:34.831 00:25:00 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:09:34.831 00:25:00 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:09:34.831 [2024-05-15 00:25:00.796025] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:09:34.831 [2024-05-15 00:25:00.796130] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1838258 ] 00:09:34.831 EAL: No free 2048 kB hugepages reported on node 1 00:09:34.831 [2024-05-15 00:25:00.910093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:35.089 [2024-05-15 00:25:01.011898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:35.089 [2024-05-15 00:25:01.016409] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:35.089 [2024-05-15 00:25:01.024389] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:41.657 00:25:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:41.657 00:25:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.657 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.657 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.657 00:25:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:41.657 00:25:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.657 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.657 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.657 00:25:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:09:41.657 00:25:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.657 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.657 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.657 00:25:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.658 00:25:07 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 
00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:41.658 00:25:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:44.948 00:25:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:44.948 00:25:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:44.948 00:25:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:44.948 00:25:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:44.948 00:25:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:44.948 00:25:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:44.948 00:25:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:44.948 00:25:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:44.948 00:25:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:44.948 00:25:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:44.948 00:25:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:44.948 00:25:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:44.948 00:25:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:44.948 00:25:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:44.948 00:25:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:44.948 00:25:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:44.948 00:25:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:44.948 00:25:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:44.948 00:25:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:44.948 00:25:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:44.948 00:25:10 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:09:44.948 00:25:10 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:09:44.948 00:25:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:09:44.948 00:25:10 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:09:44.948 00:25:10 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:09:44.948 00:25:10 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:09:44.948 00:25:10 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:09:44.948 00:09:44.948 real 0m9.675s 00:09:44.948 user 0m3.268s 00:09:44.948 sys 0m0.236s 00:09:44.948 00:25:10 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # xtrace_disable 00:09:44.948 00:25:10 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:09:44.948 ************************************ 00:09:44.948 END TEST accel_dif_generate 00:09:44.948 ************************************ 00:09:44.948 00:25:10 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:09:44.948 00:25:10 accel -- common/autotest_common.sh@1098 -- # '[' 6 -le 1 ']' 00:09:44.948 00:25:10 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:09:44.948 00:25:10 accel -- common/autotest_common.sh@10 -- # set +x 00:09:44.948 ************************************ 00:09:44.948 START TEST accel_dif_generate_copy 00:09:44.948 
************************************ 00:09:44.948 00:25:10 accel.accel_dif_generate_copy -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w dif_generate_copy 00:09:44.948 00:25:10 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:09:44.948 00:25:10 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:09:44.948 00:25:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:44.948 00:25:10 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:44.948 00:25:10 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:09:44.948 00:25:10 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:09:44.948 00:25:10 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:09:44.948 00:25:10 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:44.948 00:25:10 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:09:44.948 00:25:10 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:44.948 00:25:10 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:44.948 00:25:10 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:44.948 00:25:10 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:44.948 00:25:10 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:44.948 00:25:10 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:09:44.948 00:25:10 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:09:44.948 [2024-05-15 00:25:10.533456] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:09:44.948 [2024-05-15 00:25:10.533605] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1840069 ] 00:09:44.948 EAL: No free 2048 kB hugepages reported on node 1 00:09:44.948 [2024-05-15 00:25:10.649162] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.948 [2024-05-15 00:25:10.751695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.948 [2024-05-15 00:25:10.756185] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:44.949 [2024-05-15 00:25:10.764163] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:51.529 00:25:17 accel.accel_dif_generate_copy 
-- accel/accel.sh@21 -- # case "$var" in 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dsa 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=dsa 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:51.529 00:25:17 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:54.064 00:25:20 
accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:54.064 00:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:54.064 00:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:54.064 00:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:54.064 00:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:54.064 00:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:54.064 00:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:54.064 00:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:54.064 00:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:54.064 00:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:54.064 00:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:54.064 00:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:54.064 00:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:54.064 00:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:54.064 00:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:54.064 00:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:54.064 00:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:54.064 00:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:54.064 00:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:54.064 00:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:54.064 00:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:09:54.064 00:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:09:54.064 00:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:09:54.064 00:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:09:54.064 00:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:09:54.064 00:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:09:54.064 00:25:20 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:09:54.064 00:09:54.064 real 0m9.693s 00:09:54.064 user 0m3.273s 00:09:54.064 sys 0m0.247s 00:09:54.064 00:25:20 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:09:54.064 00:25:20 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:09:54.064 ************************************ 00:09:54.064 END TEST accel_dif_generate_copy 00:09:54.064 ************************************ 00:09:54.064 00:25:20 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:09:54.064 00:25:20 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:09:54.064 00:25:20 accel -- common/autotest_common.sh@1098 -- # '[' 8 -le 1 ']' 00:09:54.064 00:25:20 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:09:54.064 00:25:20 accel -- common/autotest_common.sh@10 -- # set +x 00:09:54.324 ************************************ 00:09:54.324 START TEST accel_comp 00:09:54.324 ************************************ 00:09:54.324 00:25:20 accel.accel_comp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w compress -l 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:09:54.324 00:25:20 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:09:54.324 00:25:20 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:09:54.324 00:25:20 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:09:54.324 00:25:20 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:09:54.324 00:25:20 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:09:54.324 00:25:20 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:09:54.324 00:25:20 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:09:54.324 00:25:20 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:54.324 00:25:20 accel.accel_comp -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:09:54.324 00:25:20 accel.accel_comp -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:54.324 00:25:20 accel.accel_comp -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:54.324 00:25:20 accel.accel_comp -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:54.324 00:25:20 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:54.324 00:25:20 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:54.324 00:25:20 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:09:54.324 00:25:20 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:09:54.324 [2024-05-15 00:25:20.287879] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:09:54.324 [2024-05-15 00:25:20.287985] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1841988 ] 00:09:54.324 EAL: No free 2048 kB hugepages reported on node 1 00:09:54.324 [2024-05-15 00:25:20.403919] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.583 [2024-05-15 00:25:20.503997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.583 [2024-05-15 00:25:20.508487] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:54.583 [2024-05-15 00:25:20.516468] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:01.152 00:25:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:01.152 00:25:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:01.152 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:01.152 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:01.152 00:25:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:01.152 00:25:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:01.152 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:01.152 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:01.152 00:25:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:01.152 00:25:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:01.152 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:01.152 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:01.152 00:25:26 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:10:01.152 00:25:26 
accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:01.152 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:01.152 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@20 -- # val=iaa 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=iaa 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@20 -- # val='1 
seconds' 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:01.153 00:25:26 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:04.441 00:25:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:04.441 00:25:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:04.441 00:25:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:04.441 00:25:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:04.441 00:25:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:04.441 00:25:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:04.441 00:25:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:04.441 00:25:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:04.441 00:25:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:04.441 00:25:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:04.441 00:25:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:04.441 00:25:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:04.441 00:25:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:04.441 00:25:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:04.441 00:25:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:04.441 00:25:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:04.441 00:25:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:04.441 00:25:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:04.441 00:25:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:04.441 00:25:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:04.441 00:25:29 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:10:04.441 00:25:29 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:10:04.441 00:25:29 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:10:04.441 00:25:29 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:10:04.441 00:25:29 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:10:04.441 00:25:29 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:10:04.441 00:25:29 accel.accel_comp -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:10:04.441 00:10:04.441 real 0m9.680s 00:10:04.441 user 0m3.288s 00:10:04.441 sys 0m0.224s 00:10:04.441 00:25:29 accel.accel_comp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:04.441 00:25:29 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:10:04.441 ************************************ 00:10:04.441 END TEST 
accel_comp 00:10:04.441 ************************************ 00:10:04.441 00:25:29 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:10:04.441 00:25:29 accel -- common/autotest_common.sh@1098 -- # '[' 9 -le 1 ']' 00:10:04.441 00:25:29 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:04.441 00:25:29 accel -- common/autotest_common.sh@10 -- # set +x 00:10:04.441 ************************************ 00:10:04.441 START TEST accel_decomp 00:10:04.441 ************************************ 00:10:04.441 00:25:29 accel.accel_decomp -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:10:04.441 00:25:29 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:10:04.441 00:25:29 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:10:04.441 00:25:29 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:04.441 00:25:29 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:04.441 00:25:29 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:10:04.441 00:25:29 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:10:04.441 00:25:29 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:10:04.441 00:25:29 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:04.441 00:25:29 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:10:04.441 00:25:29 accel.accel_decomp -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:04.441 00:25:29 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:04.441 00:25:29 accel.accel_decomp -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:04.441 00:25:29 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:04.441 00:25:29 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:04.441 00:25:29 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:10:04.441 00:25:29 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:10:04.441 [2024-05-15 00:25:30.030091] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:10:04.441 [2024-05-15 00:25:30.030203] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1843980 ] 00:10:04.441 EAL: No free 2048 kB hugepages reported on node 1 00:10:04.441 [2024-05-15 00:25:30.145485] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.441 [2024-05-15 00:25:30.245216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.441 [2024-05-15 00:25:30.249716] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:04.441 [2024-05-15 00:25:30.257699] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:11.008 
00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=iaa 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=iaa 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:11.008 00:25:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:13.603 00:25:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:13.603 00:25:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:13.603 00:25:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:13.603 00:25:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:13.603 00:25:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:13.603 00:25:39 accel.accel_decomp -- accel/accel.sh@21 -- # case 
"$var" in 00:10:13.603 00:25:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:13.603 00:25:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:13.603 00:25:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:13.603 00:25:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:13.604 00:25:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:13.604 00:25:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:13.604 00:25:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:13.604 00:25:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:13.604 00:25:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:13.604 00:25:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:13.604 00:25:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:13.604 00:25:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:13.604 00:25:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:13.604 00:25:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:13.604 00:25:39 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:10:13.604 00:25:39 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:10:13.604 00:25:39 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:10:13.604 00:25:39 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:10:13.604 00:25:39 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:10:13.604 00:25:39 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:13.604 00:25:39 accel.accel_decomp -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:10:13.604 00:10:13.604 real 0m9.667s 00:10:13.604 user 0m3.286s 00:10:13.604 sys 0m0.219s 00:10:13.604 00:25:39 accel.accel_decomp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:13.604 00:25:39 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:10:13.604 ************************************ 00:10:13.604 END TEST accel_decomp 00:10:13.604 ************************************ 00:10:13.604 00:25:39 accel -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:13.604 00:25:39 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:10:13.604 00:25:39 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:13.604 00:25:39 accel -- common/autotest_common.sh@10 -- # set +x 00:10:13.604 ************************************ 00:10:13.604 START TEST accel_decmop_full 00:10:13.604 ************************************ 00:10:13.604 00:25:39 accel.accel_decmop_full -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:13.604 00:25:39 accel.accel_decmop_full -- accel/accel.sh@16 -- # local accel_opc 00:10:13.604 00:25:39 accel.accel_decmop_full -- accel/accel.sh@17 -- # local accel_module 00:10:13.604 00:25:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:13.604 00:25:39 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:13.604 00:25:39 accel.accel_decmop_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:13.604 00:25:39 accel.accel_decmop_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:10:13.604 00:25:39 accel.accel_decmop_full -- accel/accel.sh@12 -- # build_accel_config 00:10:13.604 00:25:39 accel.accel_decmop_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:13.604 00:25:39 accel.accel_decmop_full -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:10:13.604 00:25:39 accel.accel_decmop_full -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:13.604 00:25:39 accel.accel_decmop_full -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:13.604 00:25:39 accel.accel_decmop_full -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:13.604 00:25:39 accel.accel_decmop_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:13.604 00:25:39 accel.accel_decmop_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:13.604 00:25:39 accel.accel_decmop_full -- accel/accel.sh@40 -- # local IFS=, 00:10:13.604 00:25:39 accel.accel_decmop_full -- accel/accel.sh@41 -- # jq -r . 00:10:13.604 [2024-05-15 00:25:39.749786] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:10:13.604 [2024-05-15 00:25:39.749887] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1845796 ] 00:10:13.864 EAL: No free 2048 kB hugepages reported on node 1 00:10:13.864 [2024-05-15 00:25:39.862950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.864 [2024-05-15 00:25:39.966727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.864 [2024-05-15 00:25:39.971244] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:13.864 [2024-05-15 00:25:39.979218] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=0x1 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 
-- # read -r var val 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=decompress 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=iaa 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@22 -- # accel_module=iaa 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=32 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=1 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val='1 seconds' 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:20.432 00:25:46 
accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val=Yes 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:20.432 00:25:46 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:23.719 00:25:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:23.719 00:25:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:23.719 00:25:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:23.719 00:25:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:23.719 00:25:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:23.719 00:25:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:23.719 00:25:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:23.719 00:25:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:23.719 00:25:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:23.719 00:25:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:23.719 00:25:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:23.719 00:25:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:23.719 00:25:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:23.719 00:25:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:23.719 00:25:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:23.719 00:25:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:23.719 00:25:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:23.719 00:25:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:23.719 00:25:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:23.719 00:25:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:23.719 00:25:49 accel.accel_decmop_full -- accel/accel.sh@20 -- # val= 00:10:23.719 00:25:49 accel.accel_decmop_full -- accel/accel.sh@21 -- # case "$var" in 00:10:23.719 00:25:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # IFS=: 00:10:23.719 00:25:49 accel.accel_decmop_full -- accel/accel.sh@19 -- # read -r var val 00:10:23.719 00:25:49 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:10:23.719 00:25:49 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:23.719 00:25:49 accel.accel_decmop_full -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:10:23.719 00:10:23.719 real 0m9.693s 00:10:23.719 user 0m3.297s 00:10:23.719 sys 0m0.230s 00:10:23.719 00:25:49 accel.accel_decmop_full -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:23.719 00:25:49 accel.accel_decmop_full -- 
common/autotest_common.sh@10 -- # set +x 00:10:23.719 ************************************ 00:10:23.719 END TEST accel_decmop_full 00:10:23.719 ************************************ 00:10:23.719 00:25:49 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:23.719 00:25:49 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:10:23.719 00:25:49 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:23.719 00:25:49 accel -- common/autotest_common.sh@10 -- # set +x 00:10:23.719 ************************************ 00:10:23.719 START TEST accel_decomp_mcore 00:10:23.719 ************************************ 00:10:23.719 00:25:49 accel.accel_decomp_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:23.719 00:25:49 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:10:23.719 00:25:49 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:10:23.719 00:25:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:23.719 00:25:49 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:23.719 00:25:49 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:23.719 00:25:49 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:10:23.719 00:25:49 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:23.719 00:25:49 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:23.719 00:25:49 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:10:23.719 00:25:49 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:23.720 00:25:49 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:23.720 00:25:49 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:23.720 00:25:49 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:23.720 00:25:49 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:23.720 00:25:49 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:10:23.720 00:25:49 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:10:23.720 [2024-05-15 00:25:49.502879] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
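The xtrace above shows build_accel_config collecting {"method": "dsa_scan_accel_module"} and {"method": "iaa_scan_accel_module"} and accel_perf being handed that config on /dev/fd/62. Below is a minimal bash sketch of the same plumbing, assuming the usual SPDK "subsystems" wrapper around those two entries (the wrapper itself is never printed in this log); the binary path, input file and flags are copied verbatim from the run_test line above, and SPDK_DIR is just a shorthand for the workspace path.

#!/usr/bin/env bash
# Sketch only: replay the accel_perf invocation recorded in this log,
# passing the accel JSON config through process substitution (/dev/fd/NN).
set -e
SPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk

accel_conf=$(cat <<'CONF'
{
  "subsystems": [
    {
      "subsystem": "accel",
      "config": [
        {"method": "dsa_scan_accel_module"},
        {"method": "iaa_scan_accel_module"}
      ]
    }
  ]
}
CONF
)

# -t 1 -w decompress -l <bib> -y -m 0xf: the arguments shown in the
# accel_decomp_mcore run_test line; -m 0xf selects the four cores (0-3)
# that the reactor messages below report starting.
"$SPDK_DIR/build/examples/accel_perf" \
    -c <(echo "$accel_conf") \
    -t 1 -w decompress \
    -l "$SPDK_DIR/test/accel/bib" \
    -y -m 0xf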
00:10:23.720 [2024-05-15 00:25:49.502985] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1847808 ] 00:10:23.720 EAL: No free 2048 kB hugepages reported on node 1 00:10:23.720 [2024-05-15 00:25:49.613915] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:23.720 [2024-05-15 00:25:49.716057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.720 [2024-05-15 00:25:49.716135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:23.720 [2024-05-15 00:25:49.716236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.720 [2024-05-15 00:25:49.716245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:23.720 [2024-05-15 00:25:49.720765] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:23.720 [2024-05-15 00:25:49.728752] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=iaa 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=iaa 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:30.286 00:25:56 
accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:30.286 00:25:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- 
accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:10:33.574 00:10:33.574 real 0m9.707s 00:10:33.574 user 0m31.107s 00:10:33.574 sys 0m0.239s 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:33.574 00:25:59 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:33.574 ************************************ 00:10:33.574 END TEST accel_decomp_mcore 00:10:33.574 ************************************ 00:10:33.574 00:25:59 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:33.574 00:25:59 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:10:33.574 00:25:59 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:33.574 00:25:59 accel -- common/autotest_common.sh@10 -- # set +x 00:10:33.574 ************************************ 00:10:33.574 START TEST accel_decomp_full_mcore 00:10:33.574 ************************************ 00:10:33.574 00:25:59 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:33.574 00:25:59 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:10:33.574 00:25:59 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:10:33.574 00:25:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:33.574 00:25:59 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:33.574 00:25:59 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:33.574 00:25:59 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:10:33.574 00:25:59 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:10:33.575 00:25:59 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:33.575 00:25:59 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:10:33.575 00:25:59 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:33.575 00:25:59 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:33.575 00:25:59 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:33.575 00:25:59 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:33.575 00:25:59 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:33.575 00:25:59 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:10:33.575 00:25:59 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:10:33.575 [2024-05-15 00:25:59.269505] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
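The accel_decomp_full_mcore run that starts above differs from the accel_decomp_mcore run that just finished only by the added -o 0; correspondingly, the per-operation size in the xtrace changes from '4096 bytes' to '111250 bytes'. For reference, the two run_test command lines side by side (accel_test is the harness wrapper named in the run_test lines; SPDK_DIR stands for the workspace path the log prints in full):

SPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk

# accel_decomp_mcore       -> xtrace shows val='4096 bytes'
accel_test -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -m 0xf

# accel_decomp_full_mcore  -> xtrace shows val='111250 bytes'
accel_test -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -o 0 -m 0xf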
00:10:33.575 [2024-05-15 00:25:59.269620] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1849699 ] 00:10:33.575 EAL: No free 2048 kB hugepages reported on node 1 00:10:33.575 [2024-05-15 00:25:59.384433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:33.575 [2024-05-15 00:25:59.493258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.575 [2024-05-15 00:25:59.493326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:33.575 [2024-05-15 00:25:59.493427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.575 [2024-05-15 00:25:59.493435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:33.575 [2024-05-15 00:25:59.497953] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:33.575 [2024-05-15 00:25:59.505939] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:40.142 00:26:05 
accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=iaa 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=iaa 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.142 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:10:40.143 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.143 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.143 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.143 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:10:40.143 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.143 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.143 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.143 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:10:40.143 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.143 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.143 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.143 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:10:40.143 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.143 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.143 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.143 00:26:05 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:40.143 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.143 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.143 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:40.143 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:40.143 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:40.143 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:40.143 00:26:05 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:43.420 00:26:08 
accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:10:43.420 00:10:43.420 real 0m9.737s 00:10:43.420 user 0m31.173s 00:10:43.420 sys 0m0.237s 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:43.420 00:26:08 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:10:43.420 ************************************ 00:10:43.420 END TEST accel_decomp_full_mcore 00:10:43.420 ************************************ 00:10:43.420 00:26:09 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:43.420 00:26:09 accel -- common/autotest_common.sh@1098 -- # '[' 11 -le 1 ']' 00:10:43.421 00:26:09 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:43.421 00:26:09 accel -- common/autotest_common.sh@10 -- # set +x 00:10:43.421 ************************************ 00:10:43.421 START TEST accel_decomp_mthread 00:10:43.421 ************************************ 00:10:43.421 00:26:09 accel.accel_decomp_mthread -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:43.421 00:26:09 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:43.421 00:26:09 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:43.421 00:26:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:43.421 00:26:09 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:43.421 00:26:09 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:43.421 00:26:09 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:10:43.421 00:26:09 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:10:43.421 00:26:09 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:43.421 00:26:09 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:10:43.421 00:26:09 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:43.421 00:26:09 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:43.421 00:26:09 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:43.421 00:26:09 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:43.421 00:26:09 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:43.421 00:26:09 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:43.421 00:26:09 accel.accel_decomp_mthread -- 
accel/accel.sh@41 -- # jq -r . 00:10:43.421 [2024-05-15 00:26:09.069658] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:10:43.421 [2024-05-15 00:26:09.069762] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1851667 ] 00:10:43.421 EAL: No free 2048 kB hugepages reported on node 1 00:10:43.421 [2024-05-15 00:26:09.186025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:43.421 [2024-05-15 00:26:09.291396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:43.421 [2024-05-15 00:26:09.295892] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:43.421 [2024-05-15 00:26:09.303873] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:10:49.985 
00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=iaa 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=iaa 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- 
accel/accel.sh@20 -- # val= 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:49.985 00:26:15 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:10:53.270 00:10:53.270 real 0m9.700s 00:10:53.270 user 0m3.312s 00:10:53.270 sys 0m0.237s 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # xtrace_disable 00:10:53.270 00:26:18 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:10:53.270 ************************************ 00:10:53.270 END TEST accel_decomp_mthread 00:10:53.270 ************************************ 00:10:53.270 00:26:18 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:53.270 00:26:18 accel -- common/autotest_common.sh@1098 -- # '[' 13 -le 1 ']' 00:10:53.270 00:26:18 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:10:53.270 00:26:18 accel -- common/autotest_common.sh@10 -- # set +x 00:10:53.270 ************************************ 00:10:53.270 START TEST accel_decomp_full_mthread 00:10:53.270 ************************************ 00:10:53.270 00:26:18 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1122 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:53.270 00:26:18 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:10:53.270 00:26:18 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:10:53.270 00:26:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:53.270 00:26:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:53.270 00:26:18 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:53.270 00:26:18 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:10:53.270 00:26:18 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:10:53.270 00:26:18 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:10:53.270 00:26:18 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:10:53.270 00:26:18 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:10:53.270 00:26:18 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:10:53.270 00:26:18 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:10:53.270 00:26:18 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:10:53.270 00:26:18 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:10:53.270 00:26:18 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:10:53.270 00:26:18 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:10:53.271 [2024-05-15 00:26:18.829351] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
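The accel_decomp_full_mthread run starting above and the accel_decomp_mthread run before it are the single-core variants (-c 0x1 in their EAL parameter lines) and both add -T 2; the val=2 entries in their xtrace match that flag. Their run_test command lines, with the same conventions as the earlier sketch:

SPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk

# accel_decomp_mthread       -> val='4096 bytes', val=2
accel_test -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -T 2

# accel_decomp_full_mthread  -> val='111250 bytes', val=2
accel_test -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -o 0 -T 2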
00:10:53.271 [2024-05-15 00:26:18.829415] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1853619 ] 00:10:53.271 EAL: No free 2048 kB hugepages reported on node 1 00:10:53.271 [2024-05-15 00:26:18.915075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.271 [2024-05-15 00:26:19.014461] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.271 [2024-05-15 00:26:19.018945] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:53.271 [2024-05-15 00:26:19.026925] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:10:59.910 
00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=iaa 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=iaa 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.910 00:26:25 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:10:59.910 00:26:25 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:11:02.444 00:11:02.444 real 0m9.645s 00:11:02.444 user 0m3.276s 00:11:02.444 sys 0m0.204s 00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # xtrace_disable 
00:11:02.444 00:26:28 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:11:02.444 ************************************ 00:11:02.444 END TEST accel_decomp_full_mthread 00:11:02.444 ************************************ 00:11:02.444 00:26:28 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:11:02.444 00:26:28 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:02.444 00:26:28 accel -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:11:02.444 00:26:28 accel -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:02.444 00:26:28 accel -- common/autotest_common.sh@10 -- # set +x 00:11:02.444 00:26:28 accel -- accel/accel.sh@137 -- # build_accel_config 00:11:02.444 00:26:28 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:11:02.444 00:26:28 accel -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:11:02.444 00:26:28 accel -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:11:02.444 00:26:28 accel -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:11:02.444 00:26:28 accel -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:11:02.444 00:26:28 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:11:02.444 00:26:28 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:11:02.444 00:26:28 accel -- accel/accel.sh@40 -- # local IFS=, 00:11:02.444 00:26:28 accel -- accel/accel.sh@41 -- # jq -r . 00:11:02.444 ************************************ 00:11:02.444 START TEST accel_dif_functional_tests 00:11:02.444 ************************************ 00:11:02.444 00:26:28 accel.accel_dif_functional_tests -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:11:02.444 [2024-05-15 00:26:28.570640] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
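The block above launches accel_dif_functional_tests, which runs test/accel/dif/dif against the same DSA/IAA accel config, again delivered over /dev/fd/62. A minimal sketch of that launch, reusing the assumed "subsystems" wrapper from the earlier accel_perf sketch (only the binary path and the two method entries are taken from the log):

#!/usr/bin/env bash
# Sketch only: start the DIF functional test binary recorded in this log,
# feeding it the DSA/IAA scan-module config over a /dev/fd descriptor.
set -e
SPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk

accel_conf='{"subsystems":[{"subsystem":"accel","config":[
  {"method": "dsa_scan_accel_module"},
  {"method": "iaa_scan_accel_module"}]}]}'

"$SPDK_DIR/test/accel/dif/dif" -c <(echo "$accel_conf")

In the CUnit output that follows, the "DIF not generated" cases print Guard, App Tag and Ref Tag compare failures together with Completion status 0x9 and still end in "passed", since a detected mismatch is the expected outcome for those cases.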
00:11:02.444 [2024-05-15 00:26:28.570741] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1855440 ] 00:11:02.703 EAL: No free 2048 kB hugepages reported on node 1 00:11:02.703 [2024-05-15 00:26:28.686644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:02.703 [2024-05-15 00:26:28.787252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.703 [2024-05-15 00:26:28.787334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.703 [2024-05-15 00:26:28.787338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:02.703 [2024-05-15 00:26:28.792245] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:02.703 [2024-05-15 00:26:28.800224] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:10.820 00:11:10.820 00:11:10.820 CUnit - A unit testing framework for C - Version 2.1-3 00:11:10.820 http://cunit.sourceforge.net/ 00:11:10.820 00:11:10.820 00:11:10.820 Suite: accel_dif 00:11:10.820 Test: verify: DIF generated, GUARD check ...passed 00:11:10.820 Test: verify: DIF generated, APPTAG check ...passed 00:11:10.820 Test: verify: DIF generated, REFTAG check ...passed 00:11:10.820 Test: verify: DIF not generated, GUARD check ...[2024-05-15 00:26:36.709368] idxd.c:1806:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:11:10.820 [2024-05-15 00:26:36.709409] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-05-15 00:26:36.709420] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:10.820 [2024-05-15 00:26:36.709429] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:10.820 [2024-05-15 00:26:36.709435] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:10.820 [2024-05-15 00:26:36.709443] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:10.820 [2024-05-15 00:26:36.709450] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:11:10.820 [2024-05-15 00:26:36.709458] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:11:10.820 [2024-05-15 00:26:36.709465] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:11:10.820 [2024-05-15 00:26:36.709489] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:10.820 [2024-05-15 00:26:36.709497] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. 
type=4, offset=0 00:11:10.820 [2024-05-15 00:26:36.709524] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:11:10.820 passed 00:11:10.820 Test: verify: DIF not generated, APPTAG check ...[2024-05-15 00:26:36.709595] idxd.c:1806:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:11:10.820 [2024-05-15 00:26:36.709604] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-05-15 00:26:36.709614] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:10.820 [2024-05-15 00:26:36.709621] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:10.820 [2024-05-15 00:26:36.709629] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:10.820 [2024-05-15 00:26:36.709637] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:10.820 [2024-05-15 00:26:36.709645] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:11:10.820 [2024-05-15 00:26:36.709651] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:11:10.821 [2024-05-15 00:26:36.709658] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:11:10.821 [2024-05-15 00:26:36.709667] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:10.821 [2024-05-15 00:26:36.709675] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=2, offset=0 00:11:10.821 [2024-05-15 00:26:36.709693] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:11:10.821 passed 00:11:10.821 Test: verify: DIF not generated, REFTAG check ...[2024-05-15 00:26:36.709727] idxd.c:1806:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:11:10.821 [2024-05-15 00:26:36.709737] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-05-15 00:26:36.709743] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:10.821 [2024-05-15 00:26:36.709751] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:10.821 [2024-05-15 00:26:36.709757] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:10.821 [2024-05-15 00:26:36.709765] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:10.821 [2024-05-15 00:26:36.709771] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:11:10.821 [2024-05-15 00:26:36.709781] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:11:10.821 [2024-05-15 00:26:36.709787] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:11:10.821 [2024-05-15 00:26:36.709797] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:10.821 [2024-05-15 00:26:36.709806] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. 
type=1, offset=0 00:11:10.821 [2024-05-15 00:26:36.709825] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:11:10.821 passed 00:11:10.821 Test: verify: APPTAG correct, APPTAG check ...passed 00:11:10.821 Test: verify: APPTAG incorrect, APPTAG check ...[2024-05-15 00:26:36.709899] idxd.c:1806:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:11:10.821 [2024-05-15 00:26:36.709907] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-05-15 00:26:36.709916] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:10.821 [2024-05-15 00:26:36.709922] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:10.821 [2024-05-15 00:26:36.709928] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:10.821 [2024-05-15 00:26:36.709935] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:10.821 [2024-05-15 00:26:36.709942] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:11:10.821 [2024-05-15 00:26:36.709948] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:11:10.821 [2024-05-15 00:26:36.709956] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:11:10.821 [2024-05-15 00:26:36.709965] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:11:10.821 [2024-05-15 00:26:36.709973] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=2, offset=0 00:11:10.821 passed 00:11:10.821 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:11:10.821 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:11:10.821 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:11:10.821 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-05-15 00:26:36.710133] idxd.c:1806:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:11:10.821 [2024-05-15 00:26:36.710143] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-05-15 00:26:36.710153] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:10.821 [2024-05-15 00:26:36.710160] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:10.821 [2024-05-15 00:26:36.710166] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:10.821 [2024-05-15 00:26:36.710173] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:10.821 [2024-05-15 00:26:36.710182] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:11:10.821 [2024-05-15 00:26:36.710190] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:11:10.821 [2024-05-15 00:26:36.710196] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:11:10.821 [2024-05-15 00:26:36.710204] idxd.c:1806:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:11:10.821 [2024-05-15 00:26:36.710209] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-05-15 00:26:36.710217] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:10.821 [2024-05-15 00:26:36.710223] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:10.821 [2024-05-15 00:26:36.710230] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:10.821 [2024-05-15 00:26:36.710236] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:10.821 [2024-05-15 00:26:36.710243] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:11:10.821 [2024-05-15 00:26:36.710249] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:11:10.821 [2024-05-15 00:26:36.710258] idxd_user.c: 
436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:11:10.821 [2024-05-15 00:26:36.710266] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:11:10.821 [2024-05-15 00:26:36.710275] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=1, offset=0 00:11:10.821 [2024-05-15 00:26:36.710284] idxd.c:1806:spdk_idxd_process_events: *ERROR*: Completion status 0x5 00:11:10.821 passed[2024-05-15 00:26:36.710293] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw: 00:11:10.821 Test: generate copy: DIF generated, GUARD check ...[2024-05-15 00:26:36.710301] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:10.821 [2024-05-15 00:26:36.710309] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:10.821 [2024-05-15 00:26:36.710314] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:10.821 [2024-05-15 00:26:36.710322] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:11:10.821 [2024-05-15 00:26:36.710327] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:11:10.821 [2024-05-15 00:26:36.710334] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:11:10.821 [2024-05-15 00:26:36.710341] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:11:10.821 passed 00:11:10.821 Test: generate copy: DIF generated, APTTAG check ...passed 00:11:10.821 Test: generate copy: DIF generated, REFTAG check ...passed 00:11:10.821 Test: generate copy: DIF generated, no GUARD check flag set ...[2024-05-15 00:26:36.710476] idxd.c:1565:idxd_validate_dif_insert_params: *ERROR*: Guard check flag must be set. 00:11:10.821 passed 00:11:10.821 Test: generate copy: DIF generated, no APPTAG check flag set ...[2024-05-15 00:26:36.710510] idxd.c:1570:idxd_validate_dif_insert_params: *ERROR*: Application Tag check flag must be set. 00:11:10.821 passed 00:11:10.821 Test: generate copy: DIF generated, no REFTAG check flag set ...[2024-05-15 00:26:36.710548] idxd.c:1575:idxd_validate_dif_insert_params: *ERROR*: Reference Tag check flag must be set. 00:11:10.821 passed 00:11:10.821 Test: generate copy: iovecs-len validate ...[2024-05-15 00:26:36.710590] idxd.c:1602:idxd_validate_dif_insert_iovecs: *ERROR*: Invalid length of data in src (4096) and dst (4176) in iovecs[0]. 
00:11:10.821 passed 00:11:10.821 Test: generate copy: buffer alignment validate ...passed 00:11:10.821 00:11:10.821 Run Summary: Type Total Ran Passed Failed Inactive 00:11:10.821 suites 1 1 n/a 0 0 00:11:10.821 tests 20 20 20 0 0 00:11:10.821 asserts 204 204 204 0 n/a 00:11:10.821 00:11:10.821 Elapsed time = 0.005 seconds 00:11:14.106 00:11:14.106 real 0m11.042s 00:11:14.106 user 0m22.051s 00:11:14.106 sys 0m0.299s 00:11:14.107 00:26:39 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:14.107 00:26:39 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:11:14.107 ************************************ 00:11:14.107 END TEST accel_dif_functional_tests 00:11:14.107 ************************************ 00:11:14.107 00:11:14.107 real 3m56.701s 00:11:14.107 user 2m33.664s 00:11:14.107 sys 0m7.460s 00:11:14.107 00:26:39 accel -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:14.107 00:26:39 accel -- common/autotest_common.sh@10 -- # set +x 00:11:14.107 ************************************ 00:11:14.107 END TEST accel 00:11:14.107 ************************************ 00:11:14.107 00:26:39 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel_rpc.sh 00:11:14.107 00:26:39 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:11:14.107 00:26:39 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:14.107 00:26:39 -- common/autotest_common.sh@10 -- # set +x 00:11:14.107 ************************************ 00:11:14.107 START TEST accel_rpc 00:11:14.107 ************************************ 00:11:14.107 00:26:39 accel_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel_rpc.sh 00:11:14.107 * Looking for test storage... 00:11:14.107 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel 00:11:14.107 00:26:39 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:11:14.107 00:26:39 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1857807 00:11:14.107 00:26:39 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1857807 00:11:14.107 00:26:39 accel_rpc -- common/autotest_common.sh@828 -- # '[' -z 1857807 ']' 00:11:14.107 00:26:39 accel_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.107 00:26:39 accel_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:14.107 00:26:39 accel_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.107 00:26:39 accel_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:14.107 00:26:39 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.107 00:26:39 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:11:14.107 [2024-05-15 00:26:39.761245] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
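The accel_rpc suite that follows drives the spdk_tgt launched above (note the --wait-for-rpc flag) entirely over JSON-RPC: scan the DSA and IAA modules, pin the copy opcode to the software module, then initialize the framework and read the assignment back. Replayed by hand with scripts/rpc.py from an SPDK checkout (DSA/IAA hardware assumed present as on this node, and the RPC socket assumed up before the calls, which waitforlisten guarantees in the test), the sequence is roughly:

  ./build/bin/spdk_tgt --wait-for-rpc &
  ./scripts/rpc.py dsa_scan_accel_module       # a second call is rejected with -114 "Operation already in progress"
  ./scripts/rpc.py iaa_scan_accel_module
  ./scripts/rpc.py accel_assign_opc -o copy -m software
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # prints "software" once init completes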
00:11:14.107 [2024-05-15 00:26:39.761319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1857807 ] 00:11:14.107 EAL: No free 2048 kB hugepages reported on node 1 00:11:14.107 [2024-05-15 00:26:39.844378] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:14.107 [2024-05-15 00:26:39.940981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:14.674 00:26:40 accel_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:14.674 00:26:40 accel_rpc -- common/autotest_common.sh@861 -- # return 0 00:11:14.674 00:26:40 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:11:14.674 00:26:40 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 1 -gt 0 ]] 00:11:14.674 00:26:40 accel_rpc -- accel/accel_rpc.sh@46 -- # run_test accel_scan_dsa_modules accel_scan_dsa_modules_test_suite 00:11:14.674 00:26:40 accel_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:11:14.674 00:26:40 accel_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:14.674 00:26:40 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.674 ************************************ 00:11:14.674 START TEST accel_scan_dsa_modules 00:11:14.674 ************************************ 00:11:14.674 00:26:40 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@1122 -- # accel_scan_dsa_modules_test_suite 00:11:14.674 00:26:40 accel_rpc.accel_scan_dsa_modules -- accel/accel_rpc.sh@21 -- # rpc_cmd dsa_scan_accel_module 00:11:14.674 00:26:40 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:14.674 00:26:40 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@10 -- # set +x 00:11:14.674 [2024-05-15 00:26:40.585500] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:11:14.674 00:26:40 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:14.674 00:26:40 accel_rpc.accel_scan_dsa_modules -- accel/accel_rpc.sh@22 -- # NOT rpc_cmd dsa_scan_accel_module 00:11:14.674 00:26:40 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@649 -- # local es=0 00:11:14.674 00:26:40 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd dsa_scan_accel_module 00:11:14.674 00:26:40 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:11:14.674 00:26:40 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:14.674 00:26:40 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:11:14.674 00:26:40 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:14.674 00:26:40 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@652 -- # rpc_cmd dsa_scan_accel_module 00:11:14.674 00:26:40 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:14.674 00:26:40 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@10 -- # set +x 00:11:14.674 request: 00:11:14.674 { 00:11:14.674 "method": "dsa_scan_accel_module", 00:11:14.674 "req_id": 1 00:11:14.674 } 00:11:14.674 Got JSON-RPC error response 00:11:14.674 response: 00:11:14.674 { 00:11:14.674 "code": -114, 00:11:14.674 "message": "Operation already in progress" 00:11:14.674 } 00:11:14.674 00:26:40 
accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:11:14.674 00:26:40 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@652 -- # es=1 00:11:14.674 00:26:40 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:14.674 00:26:40 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:14.674 00:26:40 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:14.674 00:11:14.674 real 0m0.024s 00:11:14.674 user 0m0.004s 00:11:14.674 sys 0m0.002s 00:11:14.674 00:26:40 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:14.674 00:26:40 accel_rpc.accel_scan_dsa_modules -- common/autotest_common.sh@10 -- # set +x 00:11:14.674 ************************************ 00:11:14.674 END TEST accel_scan_dsa_modules 00:11:14.674 ************************************ 00:11:14.674 00:26:40 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:11:14.674 00:26:40 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 1 -gt 0 ]] 00:11:14.674 00:26:40 accel_rpc -- accel/accel_rpc.sh@50 -- # run_test accel_scan_iaa_modules accel_scan_iaa_modules_test_suite 00:11:14.674 00:26:40 accel_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:11:14.674 00:26:40 accel_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:14.674 00:26:40 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.674 ************************************ 00:11:14.674 START TEST accel_scan_iaa_modules 00:11:14.674 ************************************ 00:11:14.674 00:26:40 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@1122 -- # accel_scan_iaa_modules_test_suite 00:11:14.674 00:26:40 accel_rpc.accel_scan_iaa_modules -- accel/accel_rpc.sh@29 -- # rpc_cmd iaa_scan_accel_module 00:11:14.674 00:26:40 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:14.674 00:26:40 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@10 -- # set +x 00:11:14.674 [2024-05-15 00:26:40.673479] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:11:14.674 00:26:40 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:14.674 00:26:40 accel_rpc.accel_scan_iaa_modules -- accel/accel_rpc.sh@30 -- # NOT rpc_cmd iaa_scan_accel_module 00:11:14.674 00:26:40 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@649 -- # local es=0 00:11:14.674 00:26:40 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd iaa_scan_accel_module 00:11:14.674 00:26:40 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:11:14.674 00:26:40 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:14.674 00:26:40 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:11:14.674 00:26:40 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:14.674 00:26:40 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@652 -- # rpc_cmd iaa_scan_accel_module 00:11:14.674 00:26:40 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:14.674 00:26:40 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@10 -- # set +x 00:11:14.674 request: 00:11:14.674 { 00:11:14.674 "method": "iaa_scan_accel_module", 00:11:14.674 
"req_id": 1 00:11:14.674 } 00:11:14.674 Got JSON-RPC error response 00:11:14.674 response: 00:11:14.674 { 00:11:14.674 "code": -114, 00:11:14.674 "message": "Operation already in progress" 00:11:14.674 } 00:11:14.674 00:26:40 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:11:14.674 00:26:40 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@652 -- # es=1 00:11:14.674 00:26:40 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:14.674 00:26:40 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:14.674 00:26:40 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:14.674 00:11:14.674 real 0m0.024s 00:11:14.674 user 0m0.008s 00:11:14.674 sys 0m0.001s 00:11:14.674 00:26:40 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:14.674 00:26:40 accel_rpc.accel_scan_iaa_modules -- common/autotest_common.sh@10 -- # set +x 00:11:14.674 ************************************ 00:11:14.674 END TEST accel_scan_iaa_modules 00:11:14.674 ************************************ 00:11:14.674 00:26:40 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:11:14.674 00:26:40 accel_rpc -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:11:14.674 00:26:40 accel_rpc -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:14.674 00:26:40 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.674 ************************************ 00:11:14.674 START TEST accel_assign_opcode 00:11:14.674 ************************************ 00:11:14.674 00:26:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1122 -- # accel_assign_opcode_test_suite 00:11:14.674 00:26:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:11:14.674 00:26:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:14.674 00:26:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:14.674 [2024-05-15 00:26:40.765542] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:11:14.674 00:26:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:14.674 00:26:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:11:14.674 00:26:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:14.674 00:26:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:14.674 [2024-05-15 00:26:40.773518] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:11:14.674 00:26:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:14.674 00:26:40 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:11:14.674 00:26:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:14.674 00:26:40 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:22.832 00:26:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:22.832 00:26:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:11:22.832 00:26:48 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:11:22.832 00:26:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:22.832 00:26:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:11:22.832 00:26:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:11:22.832 00:26:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:22.832 software 00:11:22.832 00:11:22.832 real 0m8.173s 00:11:22.832 user 0m0.033s 00:11:22.832 sys 0m0.009s 00:11:22.832 00:26:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:22.832 00:26:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:11:22.832 ************************************ 00:11:22.832 END TEST accel_assign_opcode 00:11:22.832 ************************************ 00:11:22.832 00:26:48 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1857807 00:11:22.832 00:26:48 accel_rpc -- common/autotest_common.sh@947 -- # '[' -z 1857807 ']' 00:11:22.832 00:26:48 accel_rpc -- common/autotest_common.sh@951 -- # kill -0 1857807 00:11:22.832 00:26:48 accel_rpc -- common/autotest_common.sh@952 -- # uname 00:11:22.832 00:26:48 accel_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:11:22.832 00:26:48 accel_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1857807 00:11:23.093 00:26:48 accel_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:11:23.093 00:26:48 accel_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:11:23.093 00:26:48 accel_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1857807' 00:11:23.093 killing process with pid 1857807 00:11:23.093 00:26:48 accel_rpc -- common/autotest_common.sh@966 -- # kill 1857807 00:11:23.093 00:26:49 accel_rpc -- common/autotest_common.sh@971 -- # wait 1857807 00:11:26.377 00:11:26.377 real 0m12.696s 00:11:26.377 user 0m4.290s 00:11:26.377 sys 0m0.648s 00:11:26.377 00:26:52 accel_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:26.377 00:26:52 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:26.377 ************************************ 00:11:26.377 END TEST accel_rpc 00:11:26.377 ************************************ 00:11:26.377 00:26:52 -- spdk/autotest.sh@181 -- # run_test app_cmdline /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/cmdline.sh 00:11:26.377 00:26:52 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:11:26.377 00:26:52 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:26.377 00:26:52 -- common/autotest_common.sh@10 -- # set +x 00:11:26.377 ************************************ 00:11:26.377 START TEST app_cmdline 00:11:26.377 ************************************ 00:11:26.377 00:26:52 app_cmdline -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/cmdline.sh 00:11:26.377 * Looking for test storage... 
00:11:26.377 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:11:26.377 00:26:52 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:11:26.377 00:26:52 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1860370 00:11:26.377 00:26:52 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1860370 00:11:26.377 00:26:52 app_cmdline -- common/autotest_common.sh@828 -- # '[' -z 1860370 ']' 00:11:26.377 00:26:52 app_cmdline -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:26.377 00:26:52 app_cmdline -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:26.377 00:26:52 app_cmdline -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:26.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:26.377 00:26:52 app_cmdline -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:26.377 00:26:52 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:11:26.377 00:26:52 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:26.377 [2024-05-15 00:26:52.535347] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:11:26.377 [2024-05-15 00:26:52.535459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1860370 ] 00:11:26.634 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.634 [2024-05-15 00:26:52.646091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.634 [2024-05-15 00:26:52.736731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:27.202 00:26:53 app_cmdline -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:27.202 00:26:53 app_cmdline -- common/autotest_common.sh@861 -- # return 0 00:11:27.202 00:26:53 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:11:27.461 { 00:11:27.461 "version": "SPDK v24.05-pre git sha1 68960dff2", 00:11:27.461 "fields": { 00:11:27.461 "major": 24, 00:11:27.461 "minor": 5, 00:11:27.461 "patch": 0, 00:11:27.461 "suffix": "-pre", 00:11:27.461 "commit": "68960dff2" 00:11:27.461 } 00:11:27.461 } 00:11:27.461 00:26:53 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:11:27.461 00:26:53 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:11:27.461 00:26:53 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:11:27.461 00:26:53 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:11:27.461 00:26:53 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:11:27.461 00:26:53 app_cmdline -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:27.461 00:26:53 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:11:27.461 00:26:53 app_cmdline -- app/cmdline.sh@26 -- # sort 00:11:27.461 00:26:53 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:11:27.461 00:26:53 app_cmdline -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:27.461 00:26:53 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:11:27.461 00:26:53 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == 
\r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:11:27.461 00:26:53 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:27.461 00:26:53 app_cmdline -- common/autotest_common.sh@649 -- # local es=0 00:11:27.461 00:26:53 app_cmdline -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:27.461 00:26:53 app_cmdline -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:11:27.461 00:26:53 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:27.461 00:26:53 app_cmdline -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:11:27.461 00:26:53 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:27.461 00:26:53 app_cmdline -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:11:27.461 00:26:53 app_cmdline -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:11:27.461 00:26:53 app_cmdline -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:11:27.461 00:26:53 app_cmdline -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:11:27.461 00:26:53 app_cmdline -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:11:27.461 request: 00:11:27.461 { 00:11:27.461 "method": "env_dpdk_get_mem_stats", 00:11:27.461 "req_id": 1 00:11:27.461 } 00:11:27.461 Got JSON-RPC error response 00:11:27.461 response: 00:11:27.461 { 00:11:27.461 "code": -32601, 00:11:27.461 "message": "Method not found" 00:11:27.461 } 00:11:27.718 00:26:53 app_cmdline -- common/autotest_common.sh@652 -- # es=1 00:11:27.718 00:26:53 app_cmdline -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:11:27.718 00:26:53 app_cmdline -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:11:27.718 00:26:53 app_cmdline -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:11:27.718 00:26:53 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1860370 00:11:27.718 00:26:53 app_cmdline -- common/autotest_common.sh@947 -- # '[' -z 1860370 ']' 00:11:27.718 00:26:53 app_cmdline -- common/autotest_common.sh@951 -- # kill -0 1860370 00:11:27.718 00:26:53 app_cmdline -- common/autotest_common.sh@952 -- # uname 00:11:27.718 00:26:53 app_cmdline -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:11:27.718 00:26:53 app_cmdline -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1860370 00:11:27.718 00:26:53 app_cmdline -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:11:27.718 00:26:53 app_cmdline -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:11:27.718 00:26:53 app_cmdline -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1860370' 00:11:27.718 killing process with pid 1860370 00:11:27.718 00:26:53 app_cmdline -- common/autotest_common.sh@966 -- # kill 1860370 00:11:27.718 00:26:53 app_cmdline -- common/autotest_common.sh@971 -- # wait 1860370 00:11:28.652 00:11:28.652 real 0m2.124s 00:11:28.652 user 0m2.351s 00:11:28.652 sys 0m0.465s 00:11:28.652 00:26:54 app_cmdline -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:28.652 00:26:54 app_cmdline -- common/autotest_common.sh@10 -- 
# set +x 00:11:28.652 ************************************ 00:11:28.652 END TEST app_cmdline 00:11:28.652 ************************************ 00:11:28.652 00:26:54 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/version.sh 00:11:28.652 00:26:54 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:11:28.652 00:26:54 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:28.652 00:26:54 -- common/autotest_common.sh@10 -- # set +x 00:11:28.652 ************************************ 00:11:28.652 START TEST version 00:11:28.652 ************************************ 00:11:28.652 00:26:54 version -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/version.sh 00:11:28.652 * Looking for test storage... 00:11:28.652 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:11:28.652 00:26:54 version -- app/version.sh@17 -- # get_header_version major 00:11:28.652 00:26:54 version -- app/version.sh@14 -- # cut -f2 00:11:28.652 00:26:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:11:28.652 00:26:54 version -- app/version.sh@14 -- # tr -d '"' 00:11:28.652 00:26:54 version -- app/version.sh@17 -- # major=24 00:11:28.652 00:26:54 version -- app/version.sh@18 -- # get_header_version minor 00:11:28.652 00:26:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:11:28.652 00:26:54 version -- app/version.sh@14 -- # cut -f2 00:11:28.652 00:26:54 version -- app/version.sh@14 -- # tr -d '"' 00:11:28.652 00:26:54 version -- app/version.sh@18 -- # minor=5 00:11:28.652 00:26:54 version -- app/version.sh@19 -- # get_header_version patch 00:11:28.652 00:26:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:11:28.652 00:26:54 version -- app/version.sh@14 -- # tr -d '"' 00:11:28.652 00:26:54 version -- app/version.sh@14 -- # cut -f2 00:11:28.652 00:26:54 version -- app/version.sh@19 -- # patch=0 00:11:28.652 00:26:54 version -- app/version.sh@20 -- # get_header_version suffix 00:11:28.652 00:26:54 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:11:28.652 00:26:54 version -- app/version.sh@14 -- # cut -f2 00:11:28.652 00:26:54 version -- app/version.sh@14 -- # tr -d '"' 00:11:28.652 00:26:54 version -- app/version.sh@20 -- # suffix=-pre 00:11:28.652 00:26:54 version -- app/version.sh@22 -- # version=24.5 00:11:28.652 00:26:54 version -- app/version.sh@25 -- # (( patch != 0 )) 00:11:28.652 00:26:54 version -- app/version.sh@28 -- # version=24.5rc0 00:11:28.652 00:26:54 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:11:28.652 00:26:54 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:11:28.652 00:26:54 version -- app/version.sh@30 -- # py_version=24.5rc0 00:11:28.652 00:26:54 version -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:11:28.652 00:11:28.652 
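For reference, the version test above boils down to pulling the SPDK_VERSION_* defines out of include/spdk/version.h with the grep/cut/tr pipeline shown in the trace and comparing the result with what the bundled Python package reports. A condensed sketch of the same probe, run from an SPDK checkout (the helper name mirrors the script's get_header_version):

  get_header_version() {
      # e.g. get_header_version MAJOR -> 24, MINOR -> 5, SUFFIX -> -pre
      grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'
  }
  ver="$(get_header_version MAJOR).$(get_header_version MINOR)"                    # 24.5 in this run
  py_ver=$(PYTHONPATH=python python3 -c 'import spdk; print(spdk.__version__)')    # 24.5rc0 in this run
  # the script itself maps the -pre suffix to rc0 and then requires an exact match
  [[ $py_ver == "$ver"* ]] && echo "header and python package versions agree: $py_ver"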
real 0m0.134s 00:11:28.652 user 0m0.075s 00:11:28.652 sys 0m0.090s 00:11:28.652 00:26:54 version -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:28.652 00:26:54 version -- common/autotest_common.sh@10 -- # set +x 00:11:28.652 ************************************ 00:11:28.652 END TEST version 00:11:28.652 ************************************ 00:11:28.652 00:26:54 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:11:28.652 00:26:54 -- spdk/autotest.sh@194 -- # uname -s 00:11:28.652 00:26:54 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:11:28.652 00:26:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:28.652 00:26:54 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:11:28.652 00:26:54 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:11:28.652 00:26:54 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:11:28.652 00:26:54 -- spdk/autotest.sh@256 -- # timing_exit lib 00:11:28.652 00:26:54 -- common/autotest_common.sh@727 -- # xtrace_disable 00:11:28.652 00:26:54 -- common/autotest_common.sh@10 -- # set +x 00:11:28.652 00:26:54 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:11:28.652 00:26:54 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:11:28.652 00:26:54 -- spdk/autotest.sh@275 -- # '[' 1 -eq 1 ']' 00:11:28.652 00:26:54 -- spdk/autotest.sh@276 -- # export NET_TYPE 00:11:28.652 00:26:54 -- spdk/autotest.sh@279 -- # '[' tcp = rdma ']' 00:11:28.652 00:26:54 -- spdk/autotest.sh@282 -- # '[' tcp = tcp ']' 00:11:28.652 00:26:54 -- spdk/autotest.sh@283 -- # run_test nvmf_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:11:28.652 00:26:54 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:11:28.652 00:26:54 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:28.652 00:26:54 -- common/autotest_common.sh@10 -- # set +x 00:11:28.652 ************************************ 00:11:28.652 START TEST nvmf_tcp 00:11:28.652 ************************************ 00:11:28.652 00:26:54 nvmf_tcp -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:11:28.912 * Looking for test storage... 00:11:28.912 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:11:28.912 00:26:54 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.912 00:26:54 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.912 00:26:54 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.912 00:26:54 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.912 00:26:54 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.912 00:26:54 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.912 00:26:54 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:11:28.912 00:26:54 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:28.912 00:26:54 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:28.913 00:26:54 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:28.913 00:26:54 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:11:28.913 00:26:54 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:11:28.913 00:26:54 nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:11:28.913 00:26:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:28.913 00:26:54 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:11:28.913 00:26:54 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:28.913 00:26:54 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:11:28.913 00:26:54 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:28.913 00:26:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:28.913 ************************************ 00:11:28.913 START TEST nvmf_example 00:11:28.913 ************************************ 00:11:28.913 00:26:54 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:28.913 * Looking for test storage... 
00:11:28.913 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@721 -- # xtrace_disable 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:11:28.913 00:26:55 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:11:34.182 Found 0000:27:00.0 (0x8086 - 0x159b) 00:11:34.182 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:11:34.183 Found 0000:27:00.1 (0x8086 - 0x159b) 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:11:34.183 Found net devices under 0000:27:00.0: cvl_0_0 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:11:34.183 Found net devices under 0000:27:00.1: cvl_0_1 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:34.183 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:34.443 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:34.443 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:34.443 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:34.443 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:34.443 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:34.443 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:34.443 00:27:00 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:34.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:34.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:11:34.443 00:11:34.443 --- 10.0.0.2 ping statistics --- 00:11:34.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.443 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:11:34.443 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:34.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:34.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:11:34.443 00:11:34.443 --- 10.0.0.1 ping statistics --- 00:11:34.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:34.443 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:11:34.443 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:34.443 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:11:34.443 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:34.443 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:34.443 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:34.443 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:34.443 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:34.443 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:34.443 00:27:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:34.443 00:27:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:34.444 00:27:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:34.444 00:27:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@721 -- # xtrace_disable 00:11:34.444 00:27:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:34.444 00:27:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:34.444 00:27:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:34.444 00:27:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1864396 00:11:34.444 00:27:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:34.444 00:27:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1864396 00:11:34.444 00:27:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@828 -- # '[' -z 1864396 ']' 00:11:34.444 00:27:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:34.444 00:27:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:34.444 00:27:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:34.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
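The nvmf_tcp_init sequence traced above boils down to a small amount of iproute2 plumbing: flush the two ice port netdevs, move one of them into a fresh network namespace, give each side an address on 10.0.0.0/24, open TCP port 4420, and ping in both directions. A standalone sketch of those steps, using the interface and namespace names from this run (they will differ on other hardware):

#!/usr/bin/env bash
# Recreate the test topology set up by nvmf_tcp_init above (run as root).
# cvl_0_0 / cvl_0_1 are the ice netdev names on this particular machine.
set -e
TARGET_IF=cvl_0_0          # becomes the target-side port, inside the namespace
INITIATOR_IF=cvl_0_1       # stays in the host namespace for the initiator
NS=cvl_0_0_ns_spdk
INITIATOR_IP=10.0.0.1
TARGET_IP=10.0.0.2

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"

ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Let NVMe/TCP (port 4420) in on the initiator-facing interface.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity-check both directions, exactly as the trace does.
ping -c 1 "$TARGET_IP"
ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"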
00:11:34.444 00:27:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:34.444 00:27:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:34.444 00:27:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:34.715 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.281 00:27:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:35.281 00:27:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@861 -- # return 0 00:11:35.281 00:27:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:35.281 00:27:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@727 -- # xtrace_disable 00:11:35.281 00:27:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.281 00:27:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:35.281 00:27:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:35.281 00:27:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.281 00:27:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:35.281 00:27:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:35.281 00:27:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:35.281 00:27:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.281 00:27:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:35.281 00:27:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:35.281 00:27:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:35.281 00:27:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:35.281 00:27:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.281 00:27:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:35.281 00:27:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:35.281 00:27:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:35.281 00:27:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:35.281 00:27:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.281 00:27:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:35.281 00:27:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.281 00:27:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:35.281 00:27:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:35.540 00:27:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:35.540 00:27:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:35.540 00:27:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 
4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:35.540 EAL: No free 2048 kB hugepages reported on node 1 00:11:45.633 Initializing NVMe Controllers 00:11:45.633 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:45.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:45.633 Initialization complete. Launching workers. 00:11:45.633 ======================================================== 00:11:45.633 Latency(us) 00:11:45.633 Device Information : IOPS MiB/s Average min max 00:11:45.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18469.00 72.14 3465.95 685.00 15406.62 00:11:45.633 ======================================================== 00:11:45.633 Total : 18469.00 72.14 3465.95 685.00 15406.62 00:11:45.633 00:11:45.633 00:27:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:45.633 00:27:11 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:45.633 00:27:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:45.633 00:27:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:11:45.633 00:27:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:45.633 00:27:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:11:45.633 00:27:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:45.633 00:27:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:45.633 rmmod nvme_tcp 00:11:45.633 rmmod nvme_fabrics 00:11:45.633 rmmod nvme_keyring 00:11:45.633 00:27:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:45.633 00:27:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:11:45.633 00:27:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:11:45.633 00:27:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1864396 ']' 00:11:45.633 00:27:11 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1864396 00:11:45.633 00:27:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@947 -- # '[' -z 1864396 ']' 00:11:45.633 00:27:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@951 -- # kill -0 1864396 00:11:45.633 00:27:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # uname 00:11:45.633 00:27:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:11:45.633 00:27:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1864396 00:11:45.633 00:27:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # process_name=nvmf 00:11:45.633 00:27:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@957 -- # '[' nvmf = sudo ']' 00:11:45.633 00:27:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1864396' 00:11:45.633 killing process with pid 1864396 00:11:45.633 00:27:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # kill 1864396 00:11:45.633 00:27:11 nvmf_tcp.nvmf_example -- common/autotest_common.sh@971 -- # wait 1864396 00:11:46.199 nvmf threads initialize successfully 00:11:46.199 bdev subsystem init successfully 00:11:46.199 created a nvmf target service 00:11:46.199 create targets's poll groups done 00:11:46.199 all subsystems of target started 00:11:46.199 nvmf target is running 00:11:46.199 all subsystems of target stopped 00:11:46.199 destroy targets's 
poll groups done 00:11:46.199 destroyed the nvmf target service 00:11:46.199 bdev subsystem finish successfully 00:11:46.199 nvmf threads destroy successfully 00:11:46.199 00:27:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:46.199 00:27:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:46.199 00:27:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:46.199 00:27:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:46.199 00:27:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:46.199 00:27:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.200 00:27:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:46.200 00:27:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.730 00:27:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:48.730 00:27:14 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:48.730 00:27:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@727 -- # xtrace_disable 00:11:48.731 00:27:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.731 00:11:48.731 real 0m19.366s 00:11:48.731 user 0m46.145s 00:11:48.731 sys 0m5.168s 00:11:48.731 00:27:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # xtrace_disable 00:11:48.731 00:27:14 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.731 ************************************ 00:11:48.731 END TEST nvmf_example 00:11:48.731 ************************************ 00:11:48.731 00:27:14 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:48.731 00:27:14 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:11:48.731 00:27:14 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:48.731 00:27:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:48.731 ************************************ 00:11:48.731 START TEST nvmf_filesystem 00:11:48.731 ************************************ 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:48.731 * Looking for test storage... 
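Before the filesystem test gets going, the nvmf_example run that just finished can be condensed: start the example target inside the namespace, configure it over the default /var/tmp/spdk.sock RPC socket, drive it with spdk_nvme_perf from the host side, then tear it down. A rough equivalent is sketched below, assuming SPDK's stock scripts/rpc.py as the RPC client (the trace's rpc_cmd wrapper hides the exact invocation); the NQN, serial number and perf flags are copied from the trace.

#!/usr/bin/env bash
# Condensed sketch of the nvmf_example steps traced above; not the test script itself.
SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
NS=cvl_0_0_ns_spdk
NQN=nqn.2016-06.io.spdk:cnode1
rpc="$SPDK/scripts/rpc.py"        # assumption: rpc_cmd resolves to this client

ip netns exec "$NS" "$SPDK/build/examples/nvmf" -i 0 -g 10000 -m 0xF &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.5; done   # the real test uses waitforlisten

$rpc nvmf_create_transport -t tcp -o -u 8192          # same transport options as traced
$rpc bdev_malloc_create 64 512                        # 64 MiB bdev, 512 B blocks -> Malloc0
$rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns "$NQN" Malloc0
$rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# 10 s of 4 KiB mixed random I/O at queue depth 64, flags copied from the trace.
"$SPDK/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN"

kill "$nvmfpid"                                       # killprocess in the trace
wait "$nvmfpid" || true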
00:11:48.731 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/dsa-phy-autotest/spdk/../output ']' 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/build_config.sh 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:11:48.731 00:27:14 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES= 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/applications.sh 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/applications.sh 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/common 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # 
_test_app_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:48.731 00:27:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/config.h ]] 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:48.732 #define SPDK_CONFIG_H 00:11:48.732 #define SPDK_CONFIG_APPS 1 00:11:48.732 #define SPDK_CONFIG_ARCH native 00:11:48.732 #define SPDK_CONFIG_ASAN 1 00:11:48.732 #undef SPDK_CONFIG_AVAHI 00:11:48.732 #undef SPDK_CONFIG_CET 00:11:48.732 #define SPDK_CONFIG_COVERAGE 1 00:11:48.732 #define SPDK_CONFIG_CROSS_PREFIX 00:11:48.732 #undef SPDK_CONFIG_CRYPTO 00:11:48.732 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:48.732 #undef SPDK_CONFIG_CUSTOMOCF 00:11:48.732 #undef SPDK_CONFIG_DAOS 00:11:48.732 #define SPDK_CONFIG_DAOS_DIR 00:11:48.732 #define SPDK_CONFIG_DEBUG 1 00:11:48.732 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:48.732 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:11:48.732 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:48.732 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:48.732 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:48.732 #undef SPDK_CONFIG_DPDK_UADK 00:11:48.732 #define SPDK_CONFIG_ENV /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk 00:11:48.732 #define SPDK_CONFIG_EXAMPLES 1 00:11:48.732 #undef SPDK_CONFIG_FC 00:11:48.732 #define SPDK_CONFIG_FC_PATH 00:11:48.732 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:48.732 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:48.732 #undef SPDK_CONFIG_FUSE 00:11:48.732 #undef SPDK_CONFIG_FUZZER 00:11:48.732 #define SPDK_CONFIG_FUZZER_LIB 00:11:48.732 #undef SPDK_CONFIG_GOLANG 00:11:48.732 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:48.732 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:48.732 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:48.732 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:11:48.732 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:48.732 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:48.732 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:48.732 #define SPDK_CONFIG_IDXD 1 00:11:48.732 #undef SPDK_CONFIG_IDXD_KERNEL 00:11:48.732 #undef SPDK_CONFIG_IPSEC_MB 00:11:48.732 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:48.732 #define SPDK_CONFIG_ISAL 1 00:11:48.732 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:48.732 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:48.732 #define SPDK_CONFIG_LIBDIR 00:11:48.732 #undef SPDK_CONFIG_LTO 00:11:48.732 #define SPDK_CONFIG_MAX_LCORES 00:11:48.732 #define SPDK_CONFIG_NVME_CUSE 1 00:11:48.732 #undef SPDK_CONFIG_OCF 00:11:48.732 #define SPDK_CONFIG_OCF_PATH 00:11:48.732 #define SPDK_CONFIG_OPENSSL_PATH 00:11:48.732 #undef 
SPDK_CONFIG_PGO_CAPTURE 00:11:48.732 #define SPDK_CONFIG_PGO_DIR 00:11:48.732 #undef SPDK_CONFIG_PGO_USE 00:11:48.732 #define SPDK_CONFIG_PREFIX /usr/local 00:11:48.732 #undef SPDK_CONFIG_RAID5F 00:11:48.732 #undef SPDK_CONFIG_RBD 00:11:48.732 #define SPDK_CONFIG_RDMA 1 00:11:48.732 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:48.732 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:48.732 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:48.732 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:48.732 #define SPDK_CONFIG_SHARED 1 00:11:48.732 #undef SPDK_CONFIG_SMA 00:11:48.732 #define SPDK_CONFIG_TESTS 1 00:11:48.732 #undef SPDK_CONFIG_TSAN 00:11:48.732 #define SPDK_CONFIG_UBLK 1 00:11:48.732 #define SPDK_CONFIG_UBSAN 1 00:11:48.732 #undef SPDK_CONFIG_UNIT_TESTS 00:11:48.732 #undef SPDK_CONFIG_URING 00:11:48.732 #define SPDK_CONFIG_URING_PATH 00:11:48.732 #undef SPDK_CONFIG_URING_ZNS 00:11:48.732 #undef SPDK_CONFIG_USDT 00:11:48.732 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:48.732 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:48.732 #undef SPDK_CONFIG_VFIO_USER 00:11:48.732 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:48.732 #define SPDK_CONFIG_VHOST 1 00:11:48.732 #define SPDK_CONFIG_VIRTIO 1 00:11:48.732 #undef SPDK_CONFIG_VTUNE 00:11:48.732 #define SPDK_CONFIG_VTUNE_DIR 00:11:48.732 #define SPDK_CONFIG_WERROR 1 00:11:48.732 #define SPDK_CONFIG_WPDK_DIR 00:11:48.732 #undef SPDK_CONFIG_XNVME 00:11:48.732 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/common 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/common 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/dsa-phy-autotest/spdk/.run_test_name 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:48.732 
00:27:14 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power ]] 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:48.732 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@88 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 1 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:11:48.733 00:27:14 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 
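The long run of "# : 0" / "# : 1" lines paired with "# export SPDK_TEST_..." above and below is what bash xtrace prints while autotest_common.sh gives each test flag a default and exports it; the flags showing 1 are the ones this job enabled in autorun-spdk.conf. The source form is not visible in the trace, but it is most likely the usual default-then-export idiom, illustrated here with two flags that do appear in the trace:

# Likely shape of the flag handling whose expansion appears in the trace.
# Under `set -x` the ':' no-op is logged with its argument already expanded,
# which is why the log shows bare ": 0" or ": 1" lines.
: "${SPDK_TEST_NVMF:=0}"      # autorun-spdk.conf set this to 1, hence ": 1" in the trace
export SPDK_TEST_NVMF
: "${SPDK_TEST_RBD:=0}"       # left at its default, hence ": 0" in the trace
export SPDK_TEST_RBD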
00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 1 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 1 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:48.733 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export 
LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem 
-- common/autotest_common.sh@279 -- # MAKE=make 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j128 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1867716 ]] 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1867716 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1677 -- # set_test_storage 2147483648 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.3S5sGR 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target /tmp/spdk.3S5sGR/tests/target /tmp/spdk.3S5sGR 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:11:48.734 
00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=971198464 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4313231360 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=259257720832 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=264763887616 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=5506166784 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=132377231360 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=132381941760 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=52943101952 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=52952780800 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9678848 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=197632 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=306176 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- 
# read -r source fs size use avail _ mount 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=132381581312 00:11:48.734 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=132381945856 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=364544 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=26476384256 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=26476388352 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:11:48.735 * Looking for test storage... 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=259257720832 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=7720759296 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:11:48.735 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1679 -- # set -o errtrace 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # shopt -s extdebug 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1681 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # PS4=' \t $test_domain -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # true 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # xtrace_fd 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:48.735 
00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:48.735 00:27:14 
nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:48.735 00:27:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:48.736 00:27:14 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:48.736 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:48.736 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.736 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:48.736 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:48.736 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:48.736 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.736 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:48.736 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.736 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:11:48.736 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:48.736 00:27:14 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:48.736 00:27:14 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:54.003 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:11:54.004 Found 0000:27:00.0 (0x8086 - 0x159b) 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:11:54.004 Found 0000:27:00.1 (0x8086 - 0x159b) 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:11:54.004 Found net devices under 0000:27:00.0: cvl_0_0 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:11:54.004 Found net devices under 0000:27:00.1: cvl_0_1 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:54.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:54.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:11:54.004 00:11:54.004 --- 10.0.0.2 ping statistics --- 00:11:54.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.004 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:54.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:54.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:11:54.004 00:11:54.004 --- 10.0.0.1 ping statistics --- 00:11:54.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.004 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1104 -- # xtrace_disable 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:54.004 ************************************ 00:11:54.004 START TEST nvmf_filesystem_no_in_capsule 00:11:54.004 ************************************ 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1122 -- # nvmf_filesystem_part 0 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 
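Stripped of the xtrace prefixes, the nvmf_tcp_init sequence traced above is a small netns-based loopback: the first ice port (cvl_0_0) is moved into a private namespace and addressed as the target, the second port (cvl_0_1) stays in the root namespace as the initiator, and a one-packet ping in each direction confirms the 10.0.0.0/24 path before nvme-tcp is loaded. The interface names and addresses below are the ones from this particular run, so treat the sketch as a condensation of this trace rather than a general recipe.

  ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic back in
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp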
00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@721 -- # xtrace_disable 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1871230 00:11:54.004 00:27:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1871230 00:11:54.005 00:27:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@828 -- # '[' -z 1871230 ']' 00:11:54.005 00:27:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.005 00:27:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local max_retries=100 00:11:54.005 00:27:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.005 00:27:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # xtrace_disable 00:11:54.005 00:27:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.005 00:27:19 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:54.005 [2024-05-15 00:27:19.938995] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:11:54.005 [2024-05-15 00:27:19.939101] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.005 EAL: No free 2048 kB hugepages reported on node 1 00:11:54.005 [2024-05-15 00:27:20.069937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:54.265 [2024-05-15 00:27:20.174429] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:54.265 [2024-05-15 00:27:20.174469] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:54.265 [2024-05-15 00:27:20.174479] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:54.265 [2024-05-15 00:27:20.174489] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:54.265 [2024-05-15 00:27:20.174497] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
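The nvmfappstart call traced here comes down to launching nvmf_tgt inside the target namespace and blocking until its JSON-RPC socket answers; only then are the provisioning RPCs that appear further down in this trace issued. A minimal equivalent, using the binary path, core mask and default RPC socket from this run (the rpc_get_methods polling loop is an illustrative stand-in for the suite's waitforlisten helper, not a copy of it):

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # stand-in for waitforlisten: poll the RPC socket until the target answers
  until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done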
00:11:54.265 [2024-05-15 00:27:20.174626] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:54.265 [2024-05-15 00:27:20.174683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:54.265 [2024-05-15 00:27:20.174792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.265 [2024-05-15 00:27:20.174799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:54.523 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:11:54.523 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@861 -- # return 0 00:11:54.523 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:54.523 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@727 -- # xtrace_disable 00:11:54.523 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.782 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:54.782 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:54.782 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:54.782 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:54.782 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:54.782 [2024-05-15 00:27:20.697224] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:54.782 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:54.782 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:54.782 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:54.782 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.042 Malloc1 00:11:55.042 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:55.042 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:55.042 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:55.042 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.042 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:55.042 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:55.042 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:55.042 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:11:55.042 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:55.042 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:55.042 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:55.042 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.042 [2024-05-15 00:27:20.974963] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:11:55.042 [2024-05-15 00:27:20.975254] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:55.042 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:55.042 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:55.042 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_name=Malloc1 00:11:55.042 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_info 00:11:55.042 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1377 -- # local bs 00:11:55.042 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local nb 00:11:55.042 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:55.042 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:11:55.042 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.042 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:11:55.042 00:27:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # bdev_info='[ 00:11:55.042 { 00:11:55.042 "name": "Malloc1", 00:11:55.042 "aliases": [ 00:11:55.042 "4f1ddca9-71d5-4d7d-86d1-079a79d2565d" 00:11:55.042 ], 00:11:55.042 "product_name": "Malloc disk", 00:11:55.042 "block_size": 512, 00:11:55.042 "num_blocks": 1048576, 00:11:55.042 "uuid": "4f1ddca9-71d5-4d7d-86d1-079a79d2565d", 00:11:55.042 "assigned_rate_limits": { 00:11:55.042 "rw_ios_per_sec": 0, 00:11:55.042 "rw_mbytes_per_sec": 0, 00:11:55.042 "r_mbytes_per_sec": 0, 00:11:55.042 "w_mbytes_per_sec": 0 00:11:55.042 }, 00:11:55.042 "claimed": true, 00:11:55.042 "claim_type": "exclusive_write", 00:11:55.042 "zoned": false, 00:11:55.042 "supported_io_types": { 00:11:55.042 "read": true, 00:11:55.042 "write": true, 00:11:55.042 "unmap": true, 00:11:55.042 "write_zeroes": true, 00:11:55.042 "flush": true, 00:11:55.042 "reset": true, 00:11:55.042 "compare": false, 00:11:55.042 "compare_and_write": false, 00:11:55.042 "abort": true, 00:11:55.042 "nvme_admin": false, 00:11:55.042 "nvme_io": false 00:11:55.042 }, 00:11:55.042 "memory_domains": [ 00:11:55.042 { 00:11:55.042 "dma_device_id": "system", 00:11:55.042 "dma_device_type": 1 
00:11:55.042 }, 00:11:55.042 { 00:11:55.042 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.042 "dma_device_type": 2 00:11:55.042 } 00:11:55.042 ], 00:11:55.042 "driver_specific": {} 00:11:55.042 } 00:11:55.042 ]' 00:11:55.042 00:27:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .block_size' 00:11:55.042 00:27:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # bs=512 00:11:55.042 00:27:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .num_blocks' 00:11:55.042 00:27:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # nb=1048576 00:11:55.042 00:27:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_size=512 00:11:55.042 00:27:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # echo 512 00:11:55.042 00:27:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:55.042 00:27:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:56.428 00:27:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:56.428 00:27:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1195 -- # local i=0 00:11:56.428 00:27:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:11:56.428 00:27:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:11:56.428 00:27:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # sleep 2 00:11:58.968 00:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:11:58.968 00:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:11:58.968 00:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:11:58.968 00:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:11:58.968 00:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:11:58.968 00:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # return 0 00:11:58.968 00:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:58.968 00:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:58.968 00:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:58.968 00:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:58.968 00:27:24 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:58.968 00:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:58.968 00:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:58.968 00:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:58.968 00:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:58.968 00:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:58.968 00:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:58.968 00:27:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:59.227 00:27:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:00.608 00:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:00.608 00:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:00.608 00:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:12:00.608 00:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:00.608 00:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.608 ************************************ 00:12:00.608 START TEST filesystem_ext4 00:12:00.608 ************************************ 00:12:00.608 00:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:00.608 00:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:00.608 00:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:00.608 00:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:00.608 00:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@923 -- # local fstype=ext4 00:12:00.608 00:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:12:00.608 00:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local i=0 00:12:00.608 00:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local force 00:12:00.608 00:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # '[' ext4 = ext4 ']' 00:12:00.608 00:27:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # force=-F 00:12:00.608 00:27:26 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:00.608 mke2fs 1.46.5 (30-Dec-2021) 00:12:00.608 Discarding device blocks: 0/522240 done 00:12:00.608 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:00.608 Filesystem UUID: e072c698-cebc-49af-8ff2-b52d4b68821a 00:12:00.608 Superblock backups stored on blocks: 00:12:00.608 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:00.608 00:12:00.608 Allocating group tables: 0/64 done 00:12:00.608 Writing inode tables: 0/64 done 00:12:00.868 Creating journal (8192 blocks): done 00:12:01.128 Writing superblocks and filesystem accounting information: 0/64 2/64 done 00:12:01.128 00:12:01.128 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@942 -- # return 0 00:12:01.128 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:01.388 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:01.388 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:01.388 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:01.388 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:01.388 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:01.388 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:01.388 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1871230 00:12:01.388 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:01.388 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:01.388 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:01.388 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:01.388 00:12:01.388 real 0m1.098s 00:12:01.388 user 0m0.022s 00:12:01.388 sys 0m0.067s 00:12:01.388 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:01.388 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:01.388 ************************************ 00:12:01.388 END TEST filesystem_ext4 00:12:01.388 ************************************ 00:12:01.388 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:01.388 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:12:01.388 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:01.388 
00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.647 ************************************ 00:12:01.647 START TEST filesystem_btrfs 00:12:01.647 ************************************ 00:12:01.647 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:01.647 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:01.647 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:01.647 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:01.647 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@923 -- # local fstype=btrfs 00:12:01.647 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:12:01.647 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local i=0 00:12:01.647 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local force 00:12:01.647 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # '[' btrfs = ext4 ']' 00:12:01.647 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # force=-f 00:12:01.647 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:01.647 btrfs-progs v6.6.2 00:12:01.647 See https://btrfs.readthedocs.io for more information. 00:12:01.647 00:12:01.647 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:01.647 NOTE: several default settings have changed in version 5.15, please make sure 00:12:01.647 this does not affect your deployments: 00:12:01.647 - DUP for metadata (-m dup) 00:12:01.647 - enabled no-holes (-O no-holes) 00:12:01.647 - enabled free-space-tree (-R free-space-tree) 00:12:01.647 00:12:01.647 Label: (null) 00:12:01.647 UUID: 5122112f-7d28-4d0e-9320-4d37b2c3b02a 00:12:01.647 Node size: 16384 00:12:01.647 Sector size: 4096 00:12:01.647 Filesystem size: 510.00MiB 00:12:01.647 Block group profiles: 00:12:01.647 Data: single 8.00MiB 00:12:01.647 Metadata: DUP 32.00MiB 00:12:01.647 System: DUP 8.00MiB 00:12:01.647 SSD detected: yes 00:12:01.647 Zoned device: no 00:12:01.647 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:01.647 Runtime features: free-space-tree 00:12:01.647 Checksum: crc32c 00:12:01.647 Number of devices: 1 00:12:01.647 Devices: 00:12:01.647 ID SIZE PATH 00:12:01.647 1 510.00MiB /dev/nvme0n1p1 00:12:01.647 00:12:01.647 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@942 -- # return 0 00:12:01.647 00:27:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:02.587 00:27:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:02.587 00:27:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:02.587 00:27:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:02.587 00:27:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:02.587 00:27:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:02.587 00:27:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:02.587 00:27:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1871230 00:12:02.587 00:27:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:02.587 00:27:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:02.587 00:27:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:02.587 00:27:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:02.587 00:12:02.587 real 0m0.953s 00:12:02.587 user 0m0.025s 00:12:02.587 sys 0m0.144s 00:12:02.587 00:27:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:02.587 00:27:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:02.587 ************************************ 00:12:02.587 END TEST filesystem_btrfs 00:12:02.587 ************************************ 00:12:02.587 00:27:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:02.587 00:27:28 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:12:02.587 00:27:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:02.587 00:27:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:02.587 ************************************ 00:12:02.587 START TEST filesystem_xfs 00:12:02.587 ************************************ 00:12:02.587 00:27:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create xfs nvme0n1 00:12:02.587 00:27:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:02.587 00:27:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:02.587 00:27:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:02.587 00:27:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@923 -- # local fstype=xfs 00:12:02.587 00:27:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:12:02.587 00:27:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local i=0 00:12:02.587 00:27:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local force 00:12:02.587 00:27:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # '[' xfs = ext4 ']' 00:12:02.587 00:27:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # force=-f 00:12:02.587 00:27:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:02.587 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:02.587 = sectsz=512 attr=2, projid32bit=1 00:12:02.587 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:02.587 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:02.587 data = bsize=4096 blocks=130560, imaxpct=25 00:12:02.587 = sunit=0 swidth=0 blks 00:12:02.587 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:02.587 log =internal log bsize=4096 blocks=16384, version=2 00:12:02.587 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:02.587 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:03.527 Discarding blocks...Done. 
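For readers following the xtrace prefixes above (autotest_common.sh lines 923-942), the make_filesystem helper being traced boils down to a small wrapper: pick the right force flag for the filesystem type and invoke the matching mkfs tool. A minimal sketch of that pattern in bash, not the verbatim SPDK helper, with a hypothetical retry cap that is not part of the traced output:

make_filesystem_sketch() {
    local fstype=$1 dev_name=$2 force i=0
    # ext4 forces with -F, the other mkfs tools with -f (mirrors the '[' btrfs = ext4 ']' check traced above)
    if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
    until mkfs."$fstype" $force "$dev_name"; do
        (( ++i > 3 )) && return 1   # hypothetical retry limit, not taken from this log
        sleep 1
    done
    return 0
}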
00:12:03.527 00:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@942 -- # return 0 00:12:03.527 00:27:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1871230 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:05.436 00:12:05.436 real 0m2.654s 00:12:05.436 user 0m0.028s 00:12:05.436 sys 0m0.074s 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:05.436 ************************************ 00:12:05.436 END TEST filesystem_xfs 00:12:05.436 ************************************ 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:05.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1216 -- # local i=0 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:12:05.436 
00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1228 -- # return 0 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1871230 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@947 -- # '[' -z 1871230 ']' 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@951 -- # kill -0 1871230 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # uname 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1871230 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1871230' 00:12:05.436 killing process with pid 1871230 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # kill 1871230 00:12:05.436 [2024-05-15 00:27:31.574540] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:05.436 00:27:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # wait 1871230 00:12:06.378 00:27:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:06.378 00:12:06.378 real 0m12.669s 00:12:06.378 user 0m48.760s 00:12:06.378 sys 0m1.265s 00:12:06.378 00:27:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:06.378 00:27:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.378 ************************************ 00:12:06.378 END TEST nvmf_filesystem_no_in_capsule 00:12:06.378 ************************************ 00:12:06.638 00:27:32 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:06.638 00:27:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1098 -- # 
'[' 3 -le 1 ']' 00:12:06.638 00:27:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:06.638 00:27:32 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:06.638 ************************************ 00:12:06.638 START TEST nvmf_filesystem_in_capsule 00:12:06.638 ************************************ 00:12:06.638 00:27:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1122 -- # nvmf_filesystem_part 4096 00:12:06.638 00:27:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:06.638 00:27:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:06.638 00:27:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:06.638 00:27:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:06.638 00:27:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.638 00:27:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1873832 00:12:06.638 00:27:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1873832 00:12:06.638 00:27:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@828 -- # '[' -z 1873832 ']' 00:12:06.638 00:27:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.638 00:27:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:06.638 00:27:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.638 00:27:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:06.638 00:27:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.638 00:27:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:06.638 [2024-05-15 00:27:32.674259] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:12:06.638 [2024-05-15 00:27:32.674365] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:06.638 EAL: No free 2048 kB hugepages reported on node 1 00:12:06.638 [2024-05-15 00:27:32.791222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:06.899 [2024-05-15 00:27:32.891843] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:06.899 [2024-05-15 00:27:32.891883] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:06.899 [2024-05-15 00:27:32.891894] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:06.899 [2024-05-15 00:27:32.891904] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:06.899 [2024-05-15 00:27:32.891912] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:06.899 [2024-05-15 00:27:32.892033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.899 [2024-05-15 00:27:32.892119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:06.899 [2024-05-15 00:27:32.892218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.899 [2024-05-15 00:27:32.892229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:07.468 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:07.468 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@861 -- # return 0 00:12:07.468 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:07.468 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:07.468 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.468 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:07.468 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:07.468 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:07.468 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:07.468 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.468 [2024-05-15 00:27:33.427074] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:07.468 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:07.468 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:07.468 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:07.468 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.728 Malloc1 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:07.728 00:27:33 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.728 [2024-05-15 00:27:33.701013] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:07.728 [2024-05-15 00:27:33.701354] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1375 -- # local bdev_name=Malloc1 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1376 -- # local bdev_info 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1377 -- # local bs 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local nb 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # bdev_info='[ 00:12:07.728 { 00:12:07.728 "name": "Malloc1", 00:12:07.728 "aliases": [ 00:12:07.728 "236cf888-858a-48ee-be5c-789f31903f72" 00:12:07.728 ], 00:12:07.728 "product_name": "Malloc disk", 00:12:07.728 "block_size": 512, 00:12:07.728 "num_blocks": 1048576, 00:12:07.728 "uuid": "236cf888-858a-48ee-be5c-789f31903f72", 00:12:07.728 "assigned_rate_limits": { 00:12:07.728 "rw_ios_per_sec": 0, 00:12:07.728 "rw_mbytes_per_sec": 0, 00:12:07.728 "r_mbytes_per_sec": 0, 00:12:07.728 "w_mbytes_per_sec": 0 00:12:07.728 }, 00:12:07.728 "claimed": true, 00:12:07.728 "claim_type": "exclusive_write", 00:12:07.728 "zoned": false, 00:12:07.728 "supported_io_types": { 00:12:07.728 "read": true, 00:12:07.728 "write": true, 00:12:07.728 "unmap": true, 00:12:07.728 "write_zeroes": true, 00:12:07.728 "flush": true, 00:12:07.728 "reset": true, 
00:12:07.728 "compare": false, 00:12:07.728 "compare_and_write": false, 00:12:07.728 "abort": true, 00:12:07.728 "nvme_admin": false, 00:12:07.728 "nvme_io": false 00:12:07.728 }, 00:12:07.728 "memory_domains": [ 00:12:07.728 { 00:12:07.728 "dma_device_id": "system", 00:12:07.728 "dma_device_type": 1 00:12:07.728 }, 00:12:07.728 { 00:12:07.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.728 "dma_device_type": 2 00:12:07.728 } 00:12:07.728 ], 00:12:07.728 "driver_specific": {} 00:12:07.728 } 00:12:07.728 ]' 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # jq '.[] .block_size' 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # bs=512 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # jq '.[] .num_blocks' 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # nb=1048576 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_size=512 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # echo 512 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:07.728 00:27:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:09.637 00:27:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:09.637 00:27:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1195 -- # local i=0 00:12:09.637 00:27:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:12:09.637 00:27:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:12:09.637 00:27:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # sleep 2 00:12:11.546 00:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:12:11.546 00:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:12:11.546 00:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:12:11.546 00:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:12:11.546 00:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:12:11.546 00:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # return 0 00:12:11.546 00:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:11.546 00:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:11.546 00:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:11.546 00:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:11.546 00:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:11.546 00:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:11.546 00:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:11.546 00:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:11.546 00:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:11.546 00:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:11.546 00:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:11.546 00:27:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:12.484 00:27:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:13.470 00:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:13.470 00:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:13.470 00:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:12:13.470 00:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:13.470 00:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:13.470 ************************************ 00:12:13.470 START TEST filesystem_in_capsule_ext4 00:12:13.470 ************************************ 00:12:13.470 00:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:13.470 00:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:13.470 00:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:13.470 00:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:13.470 00:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@923 -- # local fstype=ext4 00:12:13.470 00:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:12:13.470 00:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local i=0 00:12:13.470 00:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local force 00:12:13.470 00:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
common/autotest_common.sh@928 -- # '[' ext4 = ext4 ']' 00:12:13.470 00:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # force=-F 00:12:13.470 00:27:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:13.470 mke2fs 1.46.5 (30-Dec-2021) 00:12:13.470 Discarding device blocks: 0/522240 done 00:12:13.470 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:13.470 Filesystem UUID: 63f3e8c4-2047-4824-994a-0e218f134ea3 00:12:13.470 Superblock backups stored on blocks: 00:12:13.470 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:13.470 00:12:13.470 Allocating group tables: 0/64 done 00:12:13.470 Writing inode tables: 0/64 done 00:12:16.002 Creating journal (8192 blocks): done 00:12:16.939 Writing superblocks and filesystem accounting information: 0/64 done 00:12:16.939 00:12:16.939 00:27:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@942 -- # return 0 00:12:16.939 00:27:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:16.939 00:27:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:16.939 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:16.939 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:16.939 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:16.939 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:16.939 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:16.939 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1873832 00:12:16.939 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:16.939 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:16.939 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:16.939 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:16.939 00:12:16.939 real 0m3.651s 00:12:16.939 user 0m0.028s 00:12:16.939 sys 0m0.056s 00:12:16.939 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:16.939 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:16.939 ************************************ 00:12:16.939 END TEST filesystem_in_capsule_ext4 00:12:16.939 ************************************ 00:12:16.939 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:16.939 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:12:16.939 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:16.939 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.199 ************************************ 00:12:17.199 START TEST filesystem_in_capsule_btrfs 00:12:17.199 ************************************ 00:12:17.199 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:17.199 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:17.199 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:17.199 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:17.199 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@923 -- # local fstype=btrfs 00:12:17.199 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:12:17.199 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local i=0 00:12:17.199 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local force 00:12:17.199 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # '[' btrfs = ext4 ']' 00:12:17.199 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # force=-f 00:12:17.199 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:17.199 btrfs-progs v6.6.2 00:12:17.199 See https://btrfs.readthedocs.io for more information. 00:12:17.199 00:12:17.199 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:17.199 NOTE: several default settings have changed in version 5.15, please make sure 00:12:17.199 this does not affect your deployments: 00:12:17.199 - DUP for metadata (-m dup) 00:12:17.199 - enabled no-holes (-O no-holes) 00:12:17.199 - enabled free-space-tree (-R free-space-tree) 00:12:17.199 00:12:17.199 Label: (null) 00:12:17.199 UUID: 5edf39f0-ffef-48a7-9e3a-6a89c363bb8d 00:12:17.199 Node size: 16384 00:12:17.199 Sector size: 4096 00:12:17.199 Filesystem size: 510.00MiB 00:12:17.199 Block group profiles: 00:12:17.199 Data: single 8.00MiB 00:12:17.199 Metadata: DUP 32.00MiB 00:12:17.199 System: DUP 8.00MiB 00:12:17.199 SSD detected: yes 00:12:17.199 Zoned device: no 00:12:17.199 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:17.199 Runtime features: free-space-tree 00:12:17.199 Checksum: crc32c 00:12:17.199 Number of devices: 1 00:12:17.199 Devices: 00:12:17.199 ID SIZE PATH 00:12:17.199 1 510.00MiB /dev/nvme0n1p1 00:12:17.199 00:12:17.199 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@942 -- # return 0 00:12:17.199 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:17.459 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:17.459 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:17.459 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:17.720 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:17.720 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:17.720 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:17.720 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1873832 00:12:17.720 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:17.720 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:17.720 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:17.720 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:17.720 00:12:17.720 real 0m0.542s 00:12:17.720 user 0m0.022s 00:12:17.720 sys 0m0.131s 00:12:17.720 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:17.720 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:17.720 ************************************ 00:12:17.720 END TEST filesystem_in_capsule_btrfs 00:12:17.720 ************************************ 00:12:17.720 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:17.720 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:12:17.720 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:17.720 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:17.720 ************************************ 00:12:17.720 START TEST filesystem_in_capsule_xfs 00:12:17.720 ************************************ 00:12:17.720 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1122 -- # nvmf_filesystem_create xfs nvme0n1 00:12:17.720 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:17.720 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:17.720 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:17.720 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@923 -- # local fstype=xfs 00:12:17.720 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local dev_name=/dev/nvme0n1p1 00:12:17.720 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local i=0 00:12:17.720 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local force 00:12:17.720 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # '[' xfs = ext4 ']' 00:12:17.720 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # force=-f 00:12:17.720 00:27:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:17.720 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:17.720 = sectsz=512 attr=2, projid32bit=1 00:12:17.720 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:17.720 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:17.720 data = bsize=4096 blocks=130560, imaxpct=25 00:12:17.720 = sunit=0 swidth=0 blks 00:12:17.720 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:17.720 log =internal log bsize=4096 blocks=16384, version=2 00:12:17.720 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:17.720 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:18.657 Discarding blocks...Done. 
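The chunk that follows repeats the mount-and-smoke-test steps already seen for the earlier filesystem cases (target/filesystem.sh lines 23-43 in the prefixes): mount the new partition, create and delete a small file, unmount, then confirm the target process and the namespace are still present. A condensed sketch of that sequence, with $nvmfpid standing in for the target pid (1873832 in this run):

mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa       # small write through the NVMe/TCP-attached namespace
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                          # the nvmf_tgt process must still be running
lsblk -l -o NAME | grep -q -w nvme0n1       # namespace still visible to the host
lsblk -l -o NAME | grep -q -w nvme0n1p1     # partition still intact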
00:12:18.657 00:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@942 -- # return 0 00:12:18.657 00:27:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:20.561 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:20.561 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:20.561 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:20.561 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:20.561 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:20.561 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:20.561 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1873832 00:12:20.561 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:20.561 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:20.561 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:20.561 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:20.561 00:12:20.561 real 0m2.617s 00:12:20.561 user 0m0.020s 00:12:20.561 sys 0m0.075s 00:12:20.561 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:20.561 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:20.561 ************************************ 00:12:20.561 END TEST filesystem_in_capsule_xfs 00:12:20.561 ************************************ 00:12:20.561 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:20.561 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:20.561 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:20.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.822 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:20.822 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1216 -- # local i=0 00:12:20.822 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:12:20.822 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.822 00:27:46 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:12:20.822 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.822 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1228 -- # return 0 00:12:20.822 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:20.822 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:20.822 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:20.822 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:20.822 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:20.822 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1873832 00:12:20.822 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@947 -- # '[' -z 1873832 ']' 00:12:20.822 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@951 -- # kill -0 1873832 00:12:20.822 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # uname 00:12:20.822 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:20.822 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1873832 00:12:20.822 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:21.082 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:21.082 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1873832' 00:12:21.082 killing process with pid 1873832 00:12:21.082 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # kill 1873832 00:12:21.082 [2024-05-15 00:27:46.987796] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:21.082 00:27:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # wait 1873832 00:12:22.017 00:27:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:22.017 00:12:22.017 real 0m15.350s 00:12:22.017 user 0m59.525s 00:12:22.017 sys 0m1.202s 00:12:22.017 00:27:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:22.017 00:27:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:22.017 ************************************ 00:12:22.017 END TEST nvmf_filesystem_in_capsule 00:12:22.017 ************************************ 00:12:22.017 00:27:47 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:22.017 00:27:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:12:22.017 00:27:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:12:22.017 00:27:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:22.017 00:27:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:12:22.017 00:27:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:22.017 00:27:47 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:22.017 rmmod nvme_tcp 00:12:22.017 rmmod nvme_fabrics 00:12:22.017 rmmod nvme_keyring 00:12:22.017 00:27:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:22.017 00:27:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:12:22.017 00:27:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:12:22.017 00:27:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:22.017 00:27:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:22.017 00:27:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:22.017 00:27:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:22.017 00:27:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:22.017 00:27:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:22.017 00:27:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.017 00:27:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:22.017 00:27:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.552 00:27:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:24.552 00:12:24.552 real 0m35.732s 00:12:24.552 user 1m49.910s 00:12:24.552 sys 0m6.490s 00:12:24.552 00:27:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:24.552 00:27:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:24.552 ************************************ 00:12:24.552 END TEST nvmf_filesystem 00:12:24.552 ************************************ 00:12:24.552 00:27:50 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:24.552 00:27:50 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:12:24.552 00:27:50 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:24.552 00:27:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:24.552 ************************************ 00:12:24.552 START TEST nvmf_target_discovery 00:12:24.552 ************************************ 00:12:24.552 00:27:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:24.552 * Looking for test storage... 
00:12:24.552 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:12:24.552 00:27:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:12:24.552 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:24.552 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.552 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.552 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.552 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.552 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.552 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.552 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.552 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.552 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.552 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.552 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:12:24.552 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:12:24.552 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.552 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.552 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:24.552 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.552 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:12:24.552 00:27:50 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.552 00:27:50 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.552 00:27:50 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:12:24.553 00:27:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:31.130 00:27:56 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:12:31.130 Found 0000:27:00.0 (0x8086 - 0x159b) 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:12:31.130 Found 0000:27:00.1 (0x8086 - 0x159b) 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:12:31.130 Found net devices under 0000:27:00.0: cvl_0_0 
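The trace at this point is nvmftestinit probing for usable NICs: gather_supported_nvmf_pci_devs in test/nvmf/common.sh builds allowlists of Intel (0x8086) and Mellanox (0x15b3) device IDs, walks the detected PCI functions, and maps each match to its kernel net device through /sys/bus/pci/devices/<bdf>/net. On this host both functions of an Intel 0x159b NIC handled by the ice driver match; the first has just resolved to cvl_0_0 and the second resolves to cvl_0_1 immediately below. A minimal standalone sketch of the same sysfs walk (the short allowlist here is illustrative, not the full e810/x722/mlx lists from common.sh):

# sketch: map supported NVMe-oF NICs to their kernel net devices via sysfs
intel=0x8086
supported="0x1592 0x159b 0x37d2"          # cut-down illustration of the real ID lists
for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == "$intel" ]] || continue
    dev=$(cat "$pci/device")
    [[ " $supported " == *" $dev "* ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done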
00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:12:31.130 Found net devices under 0000:27:00.1: cvl_0_1 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:31.130 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:31.131 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:31.131 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.712 ms 00:12:31.131 00:12:31.131 --- 10.0.0.2 ping statistics --- 00:12:31.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.131 rtt min/avg/max/mdev = 0.712/0.712/0.712/0.000 ms 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:31.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:31.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:12:31.131 00:12:31.131 --- 10.0.0.1 ping statistics --- 00:12:31.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:31.131 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1881114 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1881114 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@828 -- # '[' -z 1881114 ']' 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
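With the two ports identified, nvmftestinit builds an isolated point-to-point topology before launching the target: the first interface (cvl_0_0) is moved into a private network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2/24, the second (cvl_0_1) stays in the host namespace as 10.0.0.1/24, TCP port 4420 is opened in iptables, and connectivity is verified with one ping in each direction; on this phy rig the two ports are presumably cabled back-to-back. Condensed from the commands traced above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                    # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                          # initiator side (host namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                           # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1             # namespace -> host

The target (nvmf_tgt) is then started inside that namespace and provisioned over JSON-RPC, as traced below: one null bdev, one NVMe subsystem, one namespace and one TCP listener for each of cnode1..cnode4, plus a discovery listener and a referral to port 4430, before discovery is exercised from the host with nvme discover and nvmf_get_subsystems. A condensed sketch of that sequence, assuming SPDK's scripts/rpc.py CLI (the rpc() function is an illustrative stand-in for the test's rpc_cmd helper):

rpc() { ./scripts/rpc.py "$@"; }                             # stand-in for rpc_cmd
rpc nvmf_create_transport -t tcp -o -u 8192
for i in 1 2 3 4; do
    rpc bdev_null_create Null$i 102400 512                   # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from discovery.sh
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_get_subsystems                                      # JSON view of the same configuration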
00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:31.131 00:27:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.131 [2024-05-15 00:27:57.026381] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:12:31.131 [2024-05-15 00:27:57.026513] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:31.131 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.131 [2024-05-15 00:27:57.166143] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:31.131 [2024-05-15 00:27:57.261938] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:31.131 [2024-05-15 00:27:57.261986] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:31.131 [2024-05-15 00:27:57.261997] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:31.131 [2024-05-15 00:27:57.262007] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:31.131 [2024-05-15 00:27:57.262016] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:31.131 [2024-05-15 00:27:57.262140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.131 [2024-05-15 00:27:57.262171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.131 [2024-05-15 00:27:57.262144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:31.131 [2024-05-15 00:27:57.262182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@861 -- # return 0 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.703 [2024-05-15 00:27:57.783763] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd 
bdev_null_create Null1 102400 512 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.703 Null1 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.703 [2024-05-15 00:27:57.835745] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:31.703 [2024-05-15 00:27:57.836066] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.703 Null2 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.703 00:27:57 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.703 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.963 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.964 Null3 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.964 Null4 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 
]] 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:31.964 00:27:57 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 4420 00:12:31.964 00:12:31.964 Discovery Log Number of Records 6, Generation counter 6 00:12:31.964 =====Discovery Log Entry 0====== 00:12:31.964 trtype: tcp 00:12:31.964 adrfam: ipv4 00:12:31.964 subtype: current discovery subsystem 00:12:31.964 treq: not required 00:12:31.964 portid: 0 00:12:31.964 trsvcid: 4420 00:12:31.964 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:31.964 traddr: 10.0.0.2 00:12:31.964 eflags: explicit discovery connections, duplicate discovery information 00:12:31.964 sectype: none 00:12:31.964 =====Discovery Log Entry 1====== 00:12:31.964 trtype: tcp 00:12:31.964 adrfam: ipv4 00:12:31.964 subtype: nvme subsystem 00:12:31.964 treq: not required 00:12:31.964 portid: 0 00:12:31.964 trsvcid: 4420 00:12:31.964 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:31.964 traddr: 10.0.0.2 00:12:31.964 eflags: none 00:12:31.964 sectype: none 00:12:31.964 =====Discovery Log Entry 2====== 00:12:31.964 trtype: tcp 00:12:31.964 adrfam: ipv4 00:12:31.964 subtype: nvme subsystem 00:12:31.964 treq: not required 00:12:31.964 portid: 0 00:12:31.964 trsvcid: 4420 00:12:31.964 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:31.964 traddr: 10.0.0.2 00:12:31.964 eflags: none 00:12:31.964 sectype: none 00:12:31.964 =====Discovery Log Entry 3====== 00:12:31.964 trtype: tcp 00:12:31.964 adrfam: ipv4 00:12:31.964 subtype: nvme subsystem 00:12:31.964 treq: not required 00:12:31.964 portid: 0 00:12:31.964 trsvcid: 4420 00:12:31.964 subnqn: 
nqn.2016-06.io.spdk:cnode3 00:12:31.964 traddr: 10.0.0.2 00:12:31.964 eflags: none 00:12:31.964 sectype: none 00:12:31.964 =====Discovery Log Entry 4====== 00:12:31.964 trtype: tcp 00:12:31.964 adrfam: ipv4 00:12:31.964 subtype: nvme subsystem 00:12:31.964 treq: not required 00:12:31.964 portid: 0 00:12:31.964 trsvcid: 4420 00:12:31.964 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:31.964 traddr: 10.0.0.2 00:12:31.964 eflags: none 00:12:31.964 sectype: none 00:12:31.964 =====Discovery Log Entry 5====== 00:12:31.964 trtype: tcp 00:12:31.964 adrfam: ipv4 00:12:31.964 subtype: discovery subsystem referral 00:12:31.964 treq: not required 00:12:31.964 portid: 0 00:12:31.964 trsvcid: 4430 00:12:31.964 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:31.964 traddr: 10.0.0.2 00:12:31.964 eflags: none 00:12:31.964 sectype: none 00:12:31.964 00:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:31.964 Perform nvmf subsystem discovery via RPC 00:12:31.964 00:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:31.964 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:31.964 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:31.964 [ 00:12:31.964 { 00:12:31.964 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:31.964 "subtype": "Discovery", 00:12:31.964 "listen_addresses": [ 00:12:31.964 { 00:12:31.964 "trtype": "TCP", 00:12:31.964 "adrfam": "IPv4", 00:12:31.964 "traddr": "10.0.0.2", 00:12:31.964 "trsvcid": "4420" 00:12:31.964 } 00:12:31.964 ], 00:12:31.964 "allow_any_host": true, 00:12:31.964 "hosts": [] 00:12:31.964 }, 00:12:31.964 { 00:12:31.964 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:31.964 "subtype": "NVMe", 00:12:31.964 "listen_addresses": [ 00:12:31.964 { 00:12:31.964 "trtype": "TCP", 00:12:31.964 "adrfam": "IPv4", 00:12:31.964 "traddr": "10.0.0.2", 00:12:31.964 "trsvcid": "4420" 00:12:31.964 } 00:12:31.964 ], 00:12:31.964 "allow_any_host": true, 00:12:31.964 "hosts": [], 00:12:31.964 "serial_number": "SPDK00000000000001", 00:12:31.964 "model_number": "SPDK bdev Controller", 00:12:31.964 "max_namespaces": 32, 00:12:31.964 "min_cntlid": 1, 00:12:31.964 "max_cntlid": 65519, 00:12:31.964 "namespaces": [ 00:12:31.964 { 00:12:31.964 "nsid": 1, 00:12:31.964 "bdev_name": "Null1", 00:12:31.964 "name": "Null1", 00:12:31.964 "nguid": "B57E708171A9496CA2B09E76880C2381", 00:12:31.964 "uuid": "b57e7081-71a9-496c-a2b0-9e76880c2381" 00:12:31.964 } 00:12:31.964 ] 00:12:31.964 }, 00:12:31.964 { 00:12:32.225 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:32.225 "subtype": "NVMe", 00:12:32.225 "listen_addresses": [ 00:12:32.225 { 00:12:32.225 "trtype": "TCP", 00:12:32.225 "adrfam": "IPv4", 00:12:32.225 "traddr": "10.0.0.2", 00:12:32.225 "trsvcid": "4420" 00:12:32.225 } 00:12:32.225 ], 00:12:32.225 "allow_any_host": true, 00:12:32.225 "hosts": [], 00:12:32.225 "serial_number": "SPDK00000000000002", 00:12:32.225 "model_number": "SPDK bdev Controller", 00:12:32.225 "max_namespaces": 32, 00:12:32.225 "min_cntlid": 1, 00:12:32.225 "max_cntlid": 65519, 00:12:32.225 "namespaces": [ 00:12:32.225 { 00:12:32.225 "nsid": 1, 00:12:32.225 "bdev_name": "Null2", 00:12:32.225 "name": "Null2", 00:12:32.225 "nguid": "CC29ADC0A0304CC8B9DAFADBA2945F51", 00:12:32.225 "uuid": "cc29adc0-a030-4cc8-b9da-fadba2945f51" 00:12:32.225 } 00:12:32.225 ] 00:12:32.225 }, 00:12:32.225 { 00:12:32.225 "nqn": "nqn.2016-06.io.spdk:cnode3", 
00:12:32.225 "subtype": "NVMe", 00:12:32.225 "listen_addresses": [ 00:12:32.225 { 00:12:32.225 "trtype": "TCP", 00:12:32.225 "adrfam": "IPv4", 00:12:32.225 "traddr": "10.0.0.2", 00:12:32.225 "trsvcid": "4420" 00:12:32.225 } 00:12:32.225 ], 00:12:32.225 "allow_any_host": true, 00:12:32.225 "hosts": [], 00:12:32.225 "serial_number": "SPDK00000000000003", 00:12:32.225 "model_number": "SPDK bdev Controller", 00:12:32.225 "max_namespaces": 32, 00:12:32.225 "min_cntlid": 1, 00:12:32.225 "max_cntlid": 65519, 00:12:32.225 "namespaces": [ 00:12:32.225 { 00:12:32.225 "nsid": 1, 00:12:32.225 "bdev_name": "Null3", 00:12:32.225 "name": "Null3", 00:12:32.225 "nguid": "35C29767C1B5432EA170F11C4B203404", 00:12:32.225 "uuid": "35c29767-c1b5-432e-a170-f11c4b203404" 00:12:32.225 } 00:12:32.225 ] 00:12:32.225 }, 00:12:32.225 { 00:12:32.225 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:32.225 "subtype": "NVMe", 00:12:32.225 "listen_addresses": [ 00:12:32.225 { 00:12:32.225 "trtype": "TCP", 00:12:32.225 "adrfam": "IPv4", 00:12:32.225 "traddr": "10.0.0.2", 00:12:32.225 "trsvcid": "4420" 00:12:32.226 } 00:12:32.226 ], 00:12:32.226 "allow_any_host": true, 00:12:32.226 "hosts": [], 00:12:32.226 "serial_number": "SPDK00000000000004", 00:12:32.226 "model_number": "SPDK bdev Controller", 00:12:32.226 "max_namespaces": 32, 00:12:32.226 "min_cntlid": 1, 00:12:32.226 "max_cntlid": 65519, 00:12:32.226 "namespaces": [ 00:12:32.226 { 00:12:32.226 "nsid": 1, 00:12:32.226 "bdev_name": "Null4", 00:12:32.226 "name": "Null4", 00:12:32.226 "nguid": "2BFFFB5E4B654E3FA854014736D1788E", 00:12:32.226 "uuid": "2bfffb5e-4b65-4e3f-a854-014736d1788e" 00:12:32.226 } 00:12:32.226 ] 00:12:32.226 } 00:12:32.226 ] 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:32.226 00:27:58 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:32.226 00:27:58 
nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:32.226 rmmod nvme_tcp 00:12:32.226 rmmod nvme_fabrics 00:12:32.226 rmmod nvme_keyring 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1881114 ']' 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1881114 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@947 -- # '[' -z 1881114 ']' 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@951 -- # kill -0 1881114 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # uname 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1881114 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1881114' 00:12:32.226 killing process with pid 1881114 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # kill 1881114 00:12:32.226 [2024-05-15 00:27:58.375627] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:32.226 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@971 -- # wait 1881114 00:12:32.798 00:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:32.798 00:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:32.798 00:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:32.798 00:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:32.798 00:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:32.798 00:27:58 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.798 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:32.798 00:27:58 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.338 00:28:00 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 
-- # ip -4 addr flush cvl_0_1 00:12:35.338 00:12:35.338 real 0m10.735s 00:12:35.338 user 0m7.666s 00:12:35.338 sys 0m5.538s 00:12:35.338 00:28:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:35.338 00:28:00 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:35.338 ************************************ 00:12:35.338 END TEST nvmf_target_discovery 00:12:35.338 ************************************ 00:12:35.338 00:28:00 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:35.338 00:28:00 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:12:35.338 00:28:00 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:35.338 00:28:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:35.338 ************************************ 00:12:35.338 START TEST nvmf_referrals 00:12:35.338 ************************************ 00:12:35.338 00:28:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:35.338 * Looking for test storage... 00:12:35.338 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:12:35.338 00:28:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:12:35.338 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:35.338 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:35.338 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:35.338 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:35.338 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:35.338 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:35.338 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:35.338 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:35.338 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:35.338 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:35.338 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:35.338 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:12:35.338 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:12:35.338 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:35.338 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:35.338 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:35.338 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:35.338 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:12:35.338 00:28:01 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:35.338 00:28:01 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 
-- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- 
# NVMF_REFERRAL_IP_1=127.0.0.2 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:12:35.339 00:28:01 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:40.616 
00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:12:40.616 Found 0000:27:00.0 (0x8086 - 0x159b) 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:12:40.616 Found 0000:27:00.1 (0x8086 - 0x159b) 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:12:40.616 Found net devices under 0000:27:00.0: cvl_0_0 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:12:40.616 Found net devices under 0000:27:00.1: cvl_0_1 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals 
-- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:40.616 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:40.616 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.555 ms 00:12:40.616 00:12:40.616 --- 10.0.0.2 ping statistics --- 00:12:40.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.616 rtt min/avg/max/mdev = 0.555/0.555/0.555/0.000 ms 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:40.616 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:40.616 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:12:40.616 00:12:40.616 --- 10.0.0.1 ping statistics --- 00:12:40.616 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:40.616 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:40.616 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:40.617 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:40.617 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:40.617 00:28:06 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:40.617 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:40.617 00:28:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:40.617 00:28:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.617 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1885411 00:12:40.617 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1885411 00:12:40.617 00:28:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@828 -- # '[' -z 1885411 ']' 00:12:40.617 00:28:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:40.617 00:28:06 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:40.617 00:28:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:40.617 00:28:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:12:40.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:40.617 00:28:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:40.617 00:28:06 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:40.617 [2024-05-15 00:28:06.587089] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:12:40.617 [2024-05-15 00:28:06.587203] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.617 EAL: No free 2048 kB hugepages reported on node 1 00:12:40.617 [2024-05-15 00:28:06.725961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:40.878 [2024-05-15 00:28:06.823285] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:40.878 [2024-05-15 00:28:06.823332] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:40.878 [2024-05-15 00:28:06.823342] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:40.878 [2024-05-15 00:28:06.823352] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:40.878 [2024-05-15 00:28:06.823360] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:40.878 [2024-05-15 00:28:06.823477] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:40.878 [2024-05-15 00:28:06.823567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:40.878 [2024-05-15 00:28:06.823655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.878 [2024-05-15 00:28:06.823664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:41.138 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:41.138 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@861 -- # return 0 00:12:41.138 00:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:41.138 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:41.138 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:41.396 [2024-05-15 00:28:07.344263] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:41.396 [2024-05-15 00:28:07.360215] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:41.396 [2024-05-15 00:28:07.360557] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- 
target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:41.396 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:41.655 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:41.655 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:41.655 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:41.655 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:41.655 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:41.655 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:41.655 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:41.655 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:41.655 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:41.655 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:41.655 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:41.655 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:41.655 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:41.655 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:41.655 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:41.655 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:41.655 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:41.655 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:41.655 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:41.655 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:41.655 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:41.655 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:41.655 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:41.655 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:41.655 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:41.655 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:41.915 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # 
echo 00:12:41.915 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:41.915 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:41.915 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:41.915 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:41.915 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:41.915 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:41.915 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:41.915 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:41.915 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:41.915 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:41.915 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:41.915 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:41.915 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:41.915 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:41.915 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:41.915 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:41.915 00:28:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:41.915 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:41.915 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:41.915 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:41.915 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:41.915 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:41.915 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:41.915 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:41.915 00:28:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:41.915 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:41.915 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:41.915 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:41.915 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:41.915 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:41.915 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 
--hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:41.915 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:42.175 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:42.175 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:42.175 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:42.175 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:42.175 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:42.175 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:42.175 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:42.175 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:42.175 00:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:42.175 00:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:42.175 00:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:42.175 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:42.175 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:42.175 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:42.175 00:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:42.175 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:42.175 00:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:42.175 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:42.175 00:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:42.436 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:42.436 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:42.436 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:42.436 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:42.436 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:42.436 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:42.436 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:42.436 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 
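The xtrace above is dense; the referral round-trip it exercises reduces to the short sequence sketched below. The addresses, port numbers, RPC method names and jq filters are taken directly from the trace; the direct scripts/rpc.py invocation and the omission of the --hostnqn/--hostid options are illustrative simplifications, not the exact commands the harness runs (it goes through the rpc_cmd and get_referral_ips wrappers in target/referrals.sh).

#!/usr/bin/env bash
# Minimal sketch of the discovery-referral check, assuming an nvmf_tgt is
# already listening on /var/tmp/spdk.sock for RPCs and on 10.0.0.2:8009 for
# discovery (as set up earlier in this log).
rpc=./scripts/rpc.py   # path relative to an SPDK checkout (assumption)

# Register three referrals, mirroring referrals.sh@44-46 in the trace.
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  "$rpc" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done

# View 1: referrals as reported by the target's own RPC interface.
"$rpc" nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

# View 2: referrals as seen on the wire by a discovery client; the jq filter
# drops the "current discovery subsystem" record so only referrals remain.
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

# The test passes when both views print the same addresses; it then removes
# the referrals again and re-checks that the list is empty.
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  "$rpc" nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
done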
00:12:42.436 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:42.436 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:42.436 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:42.436 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:42.436 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:42.436 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:42.436 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:42.695 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:42.695 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:42.695 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:42.695 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:42.695 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:42.695 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:42.695 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:42.695 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:42.696 00:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:42.696 00:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:42.696 00:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:42.696 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:42.696 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:42.696 00:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:42.696 00:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:42.696 00:28:08 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:42.953 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:42.953 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:42.954 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:42.954 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:42.954 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:42.954 00:28:08 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:42.954 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:42.954 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:42.954 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:42.954 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:42.954 00:28:08 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:42.954 00:28:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:42.954 00:28:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:12:42.954 00:28:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:42.954 00:28:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:12:42.954 00:28:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:42.954 00:28:08 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:42.954 rmmod nvme_tcp 00:12:42.954 rmmod nvme_fabrics 00:12:42.954 rmmod nvme_keyring 00:12:42.954 00:28:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:42.954 00:28:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:12:42.954 00:28:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:12:42.954 00:28:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1885411 ']' 00:12:42.954 00:28:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1885411 00:12:42.954 00:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@947 -- # '[' -z 1885411 ']' 00:12:42.954 00:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@951 -- # kill -0 1885411 00:12:42.954 00:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # uname 00:12:42.954 00:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:12:42.954 00:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1885411 00:12:42.954 00:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:12:42.954 00:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:12:42.954 00:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1885411' 00:12:42.954 killing process with pid 1885411 00:12:42.954 00:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # kill 1885411 00:12:42.954 [2024-05-15 00:28:09.076024] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:12:42.954 00:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@971 -- # wait 1885411 00:12:43.521 00:28:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:43.521 00:28:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:43.521 00:28:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:43.521 00:28:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:43.521 00:28:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:43.521 00:28:09 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:12:43.521 00:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:43.521 00:28:09 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.468 00:28:11 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:45.766 00:12:45.766 real 0m10.629s 00:12:45.766 user 0m12.585s 00:12:45.766 sys 0m4.855s 00:12:45.766 00:28:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # xtrace_disable 00:12:45.766 00:28:11 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:45.766 ************************************ 00:12:45.766 END TEST nvmf_referrals 00:12:45.766 ************************************ 00:12:45.767 00:28:11 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:45.767 00:28:11 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:12:45.767 00:28:11 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:12:45.767 00:28:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:45.767 ************************************ 00:12:45.767 START TEST nvmf_connect_disconnect 00:12:45.767 ************************************ 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:45.767 * Looking for test storage... 00:12:45.767 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.767 00:28:11 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:45.767 
00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:12:45.767 00:28:11 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:12:51.049 00:28:16 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:12:51.049 Found 0000:27:00.0 (0x8086 - 0x159b) 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:12:51.049 Found 0000:27:00.1 (0x8086 - 0x159b) 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:12:51.049 Found net devices under 0000:27:00.0: cvl_0_0 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:12:51.049 Found net devices under 0000:27:00.1: cvl_0_1 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect 
-- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:51.049 00:28:16 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:51.049 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:51.049 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:51.049 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:51.049 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:51.049 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:51.049 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:51.050 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:51.050 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:51.050 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:12:51.050 00:12:51.050 --- 10.0.0.2 ping statistics --- 00:12:51.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.050 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:12:51.050 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:51.050 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:51.050 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:12:51.050 00:12:51.050 --- 10.0.0.1 ping statistics --- 00:12:51.050 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:51.050 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:12:51.050 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:51.050 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:12:51.050 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:51.050 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:51.050 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:51.050 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:51.050 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:51.050 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:51.050 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:51.050 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:51.050 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:51.050 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@721 -- # xtrace_disable 00:12:51.050 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:51.050 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1889994 00:12:51.050 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1889994 00:12:51.050 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@828 -- # '[' -z 1889994 ']' 00:12:51.050 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:51.050 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local max_retries=100 00:12:51.050 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:51.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:51.050 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # xtrace_disable 00:12:51.050 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:51.050 00:28:17 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:51.311 [2024-05-15 00:28:17.265363] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
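The nvmf_tcp_init block traced just above (nvmf/common.sh@229-268) is the entire TCP test bed: one of the two cvl_* ports is moved into a private network namespace to act as the target, the other stays in the root namespace as the initiator, and the target application is then started inside that namespace. A condensed sketch of the same steps, using the interface names and addresses from the trace (needs root; the relative nvmf_tgt path and the backgrounding are illustrative, since the harness uses the full build path and waitforlisten):

# Condensed nvmf_tcp_init, as traced above (run as root).
TGT_IF=cvl_0_0            # target-side port, moved into the namespace
INI_IF=cvl_0_1            # initiator-side port, left in the root namespace
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                          # isolate the target NIC
ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP in

# Sanity-check both directions, then launch the target inside the namespace.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &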
00:12:51.311 [2024-05-15 00:28:17.265470] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:51.311 EAL: No free 2048 kB hugepages reported on node 1 00:12:51.311 [2024-05-15 00:28:17.394390] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:51.571 [2024-05-15 00:28:17.497047] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:51.571 [2024-05-15 00:28:17.497085] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:51.571 [2024-05-15 00:28:17.497095] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:51.571 [2024-05-15 00:28:17.497105] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:51.571 [2024-05-15 00:28:17.497113] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:51.571 [2024-05-15 00:28:17.497231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:51.571 [2024-05-15 00:28:17.497315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:51.571 [2024-05-15 00:28:17.497412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:51.571 [2024-05-15 00:28:17.497423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@861 -- # return 0 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@727 -- # xtrace_disable 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:52.137 [2024-05-15 00:28:18.060232] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:52.137 00:28:18 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@560 -- # xtrace_disable 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:52.137 [2024-05-15 00:28:18.128250] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:12:52.137 [2024-05-15 00:28:18.128516] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:52.137 00:28:18 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:56.326 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:07.095 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.384 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:13:10.384 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:13:10.384 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:10.384 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:13:10.384 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:10.384 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:13:10.384 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:10.384 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:10.384 rmmod nvme_tcp 00:13:10.384 rmmod nvme_fabrics 00:13:10.384 rmmod nvme_keyring 00:13:10.384 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:10.384 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:13:10.384 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:13:10.384 00:28:36 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1889994 ']' 00:13:10.384 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1889994 00:13:10.384 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@947 -- # '[' -z 1889994 ']' 00:13:10.384 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@951 -- # kill -0 1889994 00:13:10.384 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # uname 00:13:10.384 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:10.384 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1889994 00:13:10.384 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:10.384 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:10.384 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1889994' 00:13:10.384 killing process with pid 1889994 00:13:10.384 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # kill 1889994 00:13:10.384 [2024-05-15 00:28:36.253601] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:10.384 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # wait 1889994 00:13:10.643 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:10.643 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:10.643 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:10.643 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:10.643 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:10.643 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.643 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:10.643 00:28:36 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.186 00:28:38 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:13.186 00:13:13.186 real 0m27.161s 00:13:13.186 user 1m17.826s 00:13:13.186 sys 0m5.201s 00:13:13.186 00:28:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:13.186 00:28:38 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:13.186 ************************************ 00:13:13.186 END TEST nvmf_connect_disconnect 00:13:13.186 ************************************ 00:13:13.186 00:28:38 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:13.186 00:28:38 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:13.186 00:28:38 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:13.186 00:28:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:13.186 ************************************ 00:13:13.186 START TEST nvmf_multitarget 
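
The nvmf_connect_disconnect pass that just finished drives the target purely over JSON-RPC and then cycles the initiator five times (the five "disconnected 1 controller(s)" lines above). A condensed sketch of that flow, assuming a running nvmf_tgt with scripts/rpc.py and nvme-cli on PATH, and reusing the NQN, address and sizes visible in the log; the exact initiator flags used by connect_disconnect.sh are not shown in this excerpt, so the connect line below is only the generic nvme-cli form:

  # target side: transport, backing bdev, subsystem, namespace, listener
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc.py bdev_malloc_create 64 512                     # 64 MB malloc bdev, 512-byte blocks, reported back as Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # initiator side: connect/disconnect loop, 5 iterations as in the test
  for i in $(seq 1 5); do
      nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # prints "NQN:... disconnected 1 controller(s)"
  done
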
00:13:13.186 ************************************ 00:13:13.186 00:28:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:13.186 * Looking for test storage... 00:13:13.186 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:13.186 00:28:38 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:13.186 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:13:13.186 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:13.186 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:13.186 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:13.186 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:13.186 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:13.186 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:13.186 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:13.186 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:13.186 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:13.186 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:13.186 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:13:13.186 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:13:13.186 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:13.186 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:13.186 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:13.186 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:13.186 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:13.186 00:28:38 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:13.186 00:28:38 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:13.187 00:28:38 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:13.187 00:28:38 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.187 00:28:38 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.187 00:28:38 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.187 00:28:38 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:13:13.187 00:28:38 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:13.187 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:13:13.187 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:13.187 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:13.187 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:13.187 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:13.187 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:13.187 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:13.187 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:13.187 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:13.187 00:28:38 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:13.187 00:28:38 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:13:13.187 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:13.187 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:13.187 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:13.187 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:13.187 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:13.187 00:28:38 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.187 
00:28:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.187 00:28:38 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:13.187 00:28:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:13:13.187 00:28:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:13.187 00:28:39 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:13:13.187 00:28:39 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:19.758 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:19.758 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:13:19.758 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:19.758 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:19.758 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:19.758 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:19.758 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:19.758 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:13:19.758 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:19.758 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:13:19.758 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ '' == 
mlx5 ]] 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:13:19.759 Found 0000:27:00.0 (0x8086 - 0x159b) 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:13:19.759 Found 0000:27:00.1 (0x8086 - 0x159b) 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:13:19.759 Found net devices under 0000:27:00.0: cvl_0_0 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:19.759 00:28:44 
nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:13:19.759 Found net devices under 0000:27:00.1: cvl_0_1 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:19.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:19.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.457 ms 00:13:19.759 00:13:19.759 --- 10.0.0.2 ping statistics --- 00:13:19.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.759 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:19.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:19.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:13:19.759 00:13:19.759 --- 10.0.0.1 ping statistics --- 00:13:19.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:19.759 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1897859 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1897859 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@828 -- # '[' -z 1897859 ']' 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:19.759 00:28:44 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:19.759 [2024-05-15 00:28:44.946230] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
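
nvmftestinit above wires the two ice ports into a split topology: the target-side port is moved into the cvl_0_0_ns_spdk network namespace while the initiator port stays in the default namespace, iptables is opened for the NVMe/TCP port, and a ping in each direction confirms connectivity. A condensed, hand-run equivalent of that wiring, using the interface names, addresses and flags from this run (the harness' own helper functions and error handling are omitted, and nvmf_tgt is shown with a relative path):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # permit NVMe/TCP (port 4420) on the initiator-side interface
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator

  # the target application itself then runs inside the namespace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

This is also why NVMF_APP is re-prefixed with NVMF_TARGET_NS_CMD in the log: the target must always be launched from inside that namespace.
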
00:13:19.759 [2024-05-15 00:28:44.946338] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.759 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.759 [2024-05-15 00:28:45.069171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:19.759 [2024-05-15 00:28:45.170080] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:19.759 [2024-05-15 00:28:45.170115] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:19.759 [2024-05-15 00:28:45.170124] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:19.759 [2024-05-15 00:28:45.170134] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:19.760 [2024-05-15 00:28:45.170141] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:19.760 [2024-05-15 00:28:45.170332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:19.760 [2024-05-15 00:28:45.170430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:19.760 [2024-05-15 00:28:45.170531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.760 [2024-05-15 00:28:45.170541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:19.760 00:28:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:19.760 00:28:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@861 -- # return 0 00:13:19.760 00:28:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:19.760 00:28:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:19.760 00:28:45 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:19.760 00:28:45 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:19.760 00:28:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:19.760 00:28:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:19.760 00:28:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:13:19.760 00:28:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:19.760 00:28:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:19.760 "nvmf_tgt_1" 00:13:19.760 00:28:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:20.019 "nvmf_tgt_2" 00:13:20.019 00:28:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:20.019 00:28:45 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:13:20.019 00:28:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:20.019 00:28:46 
nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:20.019 true 00:13:20.019 00:28:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:20.278 true 00:13:20.278 00:28:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:20.278 00:28:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:13:20.278 00:28:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:20.278 00:28:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:20.278 00:28:46 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:13:20.278 00:28:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:20.278 00:28:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:13:20.278 00:28:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:20.278 00:28:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:13:20.278 00:28:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:20.278 00:28:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:20.278 rmmod nvme_tcp 00:13:20.278 rmmod nvme_fabrics 00:13:20.278 rmmod nvme_keyring 00:13:20.278 00:28:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:20.278 00:28:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:13:20.278 00:28:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:13:20.278 00:28:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1897859 ']' 00:13:20.278 00:28:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1897859 00:13:20.278 00:28:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@947 -- # '[' -z 1897859 ']' 00:13:20.278 00:28:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@951 -- # kill -0 1897859 00:13:20.278 00:28:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # uname 00:13:20.278 00:28:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:20.278 00:28:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1897859 00:13:20.538 00:28:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:20.538 00:28:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:20.538 00:28:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1897859' 00:13:20.538 killing process with pid 1897859 00:13:20.538 00:28:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # kill 1897859 00:13:20.538 00:28:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@971 -- # wait 1897859 00:13:20.797 00:28:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:20.797 00:28:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:20.797 00:28:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:20.797 00:28:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
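
The multitarget pass above is purely control-plane bookkeeping: it counts targets with nvmf_get_targets, adds nvmf_tgt_1 and nvmf_tgt_2, removes them again, and re-checks the count at each step. A bare-bones stand-alone version of that sequence ($rpc_py below is shorthand for the multitarget_rpc.py path shown in the log):

  rpc_py=/path/to/spdk/test/nvmf/target/multitarget_rpc.py   # full path appears in the log above

  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]        # only the default target to start with
  $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]        # default target plus the two new ones
  $rpc_py nvmf_delete_target -n nvmf_tgt_1
  $rpc_py nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]        # back to just the default target
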
\n\v\m\f\_\t\g\t\_\n\s ]] 00:13:20.797 00:28:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:20.797 00:28:46 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.797 00:28:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:20.797 00:28:46 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.342 00:28:48 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:23.342 00:13:23.342 real 0m10.098s 00:13:23.342 user 0m8.898s 00:13:23.342 sys 0m4.955s 00:13:23.342 00:28:48 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:23.342 00:28:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:13:23.342 ************************************ 00:13:23.342 END TEST nvmf_multitarget 00:13:23.342 ************************************ 00:13:23.342 00:28:49 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:23.342 00:28:49 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:23.342 00:28:49 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:23.342 00:28:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:23.342 ************************************ 00:13:23.342 START TEST nvmf_rpc 00:13:23.342 ************************************ 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:23.342 * Looking for test storage... 00:13:23.342 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- 
# '[' -n '' ']' 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:13:23.342 00:28:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.677 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:28.677 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:13:28.677 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:28.677 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:28.677 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:28.677 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:28.677 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:28.677 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:13:28.677 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:28.677 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:13:28.677 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:13:28.677 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:13:28.677 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:13:28.677 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:13:28.677 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:13:28.677 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:28.677 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:28.677 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:28.677 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:28.677 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:28.677 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:28.677 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:28.677 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:28.677 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:28.677 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:28.677 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:13:28.678 Found 0000:27:00.0 (0x8086 - 0x159b) 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:13:28.678 Found 0000:27:00.1 (0x8086 - 0x159b) 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:13:28.678 Found net devices under 0000:27:00.0: cvl_0_0 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:13:28.678 Found net devices under 0000:27:00.1: cvl_0_1 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:28.678 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:28.678 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.638 ms 00:13:28.678 00:13:28.678 --- 10.0.0.2 ping statistics --- 00:13:28.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.678 rtt min/avg/max/mdev = 0.638/0.638/0.638/0.000 ms 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:28.678 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:28.678 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:13:28.678 00:13:28.678 --- 10.0.0.1 ping statistics --- 00:13:28.678 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.678 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@721 -- # xtrace_disable 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1902094 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1902094 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@828 -- # '[' -z 1902094 ']' 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local max_retries=100 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@837 -- # xtrace_disable 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.678 00:28:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:28.678 [2024-05-15 00:28:54.702502] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
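
Both nvmf suites in this stretch identify the initiator with a host NQN that nvme-cli generates once when common.sh is sourced (the 80ef6226-... UUID seen in the NVME_HOSTNQN/NVME_HOSTID lines is specific to this run). A small sketch of that pattern; the parameter expansion used to pull out the bare UUID is illustrative, not necessarily the helper's exact code:

  NVME_HOSTNQN=$(nvme gen-hostnqn)               # e.g. nqn.2014-08.org.nvmexpress:uuid:80ef6226-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}            # keep just the UUID portion as the host ID
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

  # reused on initiator-side connects later in the log, e.g.:
  nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
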
00:13:28.678 [2024-05-15 00:28:54.702657] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.678 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.936 [2024-05-15 00:28:54.852606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:28.936 [2024-05-15 00:28:54.970057] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.936 [2024-05-15 00:28:54.970106] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.936 [2024-05-15 00:28:54.970117] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:28.936 [2024-05-15 00:28:54.970128] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:28.936 [2024-05-15 00:28:54.970138] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:28.936 [2024-05-15 00:28:54.970254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.936 [2024-05-15 00:28:54.970348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.936 [2024-05-15 00:28:54.970381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.936 [2024-05-15 00:28:54.970370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.503 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:13:29.503 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@861 -- # return 0 00:13:29.503 00:28:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:29.503 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@727 -- # xtrace_disable 00:13:29.503 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.503 00:28:55 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.503 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:29.503 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:29.503 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.503 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:29.503 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:29.503 "tick_rate": 1900000000, 00:13:29.503 "poll_groups": [ 00:13:29.503 { 00:13:29.503 "name": "nvmf_tgt_poll_group_000", 00:13:29.503 "admin_qpairs": 0, 00:13:29.503 "io_qpairs": 0, 00:13:29.503 "current_admin_qpairs": 0, 00:13:29.503 "current_io_qpairs": 0, 00:13:29.503 "pending_bdev_io": 0, 00:13:29.503 "completed_nvme_io": 0, 00:13:29.503 "transports": [] 00:13:29.503 }, 00:13:29.503 { 00:13:29.503 "name": "nvmf_tgt_poll_group_001", 00:13:29.503 "admin_qpairs": 0, 00:13:29.503 "io_qpairs": 0, 00:13:29.503 "current_admin_qpairs": 0, 00:13:29.503 "current_io_qpairs": 0, 00:13:29.503 "pending_bdev_io": 0, 00:13:29.503 "completed_nvme_io": 0, 00:13:29.503 "transports": [] 00:13:29.503 }, 00:13:29.503 { 00:13:29.503 "name": "nvmf_tgt_poll_group_002", 00:13:29.503 "admin_qpairs": 0, 00:13:29.503 "io_qpairs": 0, 00:13:29.503 "current_admin_qpairs": 0, 00:13:29.503 "current_io_qpairs": 0, 00:13:29.503 "pending_bdev_io": 0, 00:13:29.503 "completed_nvme_io": 0, 00:13:29.503 "transports": [] 
00:13:29.503 }, 00:13:29.503 { 00:13:29.503 "name": "nvmf_tgt_poll_group_003", 00:13:29.503 "admin_qpairs": 0, 00:13:29.503 "io_qpairs": 0, 00:13:29.503 "current_admin_qpairs": 0, 00:13:29.503 "current_io_qpairs": 0, 00:13:29.503 "pending_bdev_io": 0, 00:13:29.503 "completed_nvme_io": 0, 00:13:29.503 "transports": [] 00:13:29.503 } 00:13:29.503 ] 00:13:29.503 }' 00:13:29.503 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:29.503 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:29.503 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:29.503 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:29.503 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:29.503 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:29.503 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:29.503 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:29.503 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:29.503 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.503 [2024-05-15 00:28:55.528011] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:29.503 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:29.503 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:29.503 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:29.503 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.503 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:29.503 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:29.503 "tick_rate": 1900000000, 00:13:29.503 "poll_groups": [ 00:13:29.503 { 00:13:29.503 "name": "nvmf_tgt_poll_group_000", 00:13:29.503 "admin_qpairs": 0, 00:13:29.503 "io_qpairs": 0, 00:13:29.503 "current_admin_qpairs": 0, 00:13:29.503 "current_io_qpairs": 0, 00:13:29.503 "pending_bdev_io": 0, 00:13:29.503 "completed_nvme_io": 0, 00:13:29.503 "transports": [ 00:13:29.503 { 00:13:29.503 "trtype": "TCP" 00:13:29.503 } 00:13:29.503 ] 00:13:29.503 }, 00:13:29.503 { 00:13:29.503 "name": "nvmf_tgt_poll_group_001", 00:13:29.503 "admin_qpairs": 0, 00:13:29.503 "io_qpairs": 0, 00:13:29.503 "current_admin_qpairs": 0, 00:13:29.503 "current_io_qpairs": 0, 00:13:29.503 "pending_bdev_io": 0, 00:13:29.503 "completed_nvme_io": 0, 00:13:29.504 "transports": [ 00:13:29.504 { 00:13:29.504 "trtype": "TCP" 00:13:29.504 } 00:13:29.504 ] 00:13:29.504 }, 00:13:29.504 { 00:13:29.504 "name": "nvmf_tgt_poll_group_002", 00:13:29.504 "admin_qpairs": 0, 00:13:29.504 "io_qpairs": 0, 00:13:29.504 "current_admin_qpairs": 0, 00:13:29.504 "current_io_qpairs": 0, 00:13:29.504 "pending_bdev_io": 0, 00:13:29.504 "completed_nvme_io": 0, 00:13:29.504 "transports": [ 00:13:29.504 { 00:13:29.504 "trtype": "TCP" 00:13:29.504 } 00:13:29.504 ] 00:13:29.504 }, 00:13:29.504 { 00:13:29.504 "name": "nvmf_tgt_poll_group_003", 00:13:29.504 "admin_qpairs": 0, 00:13:29.504 "io_qpairs": 0, 00:13:29.504 "current_admin_qpairs": 0, 00:13:29.504 "current_io_qpairs": 0, 00:13:29.504 "pending_bdev_io": 0, 00:13:29.504 "completed_nvme_io": 0, 00:13:29.504 "transports": [ 00:13:29.504 { 00:13:29.504 "trtype": "TCP" 00:13:29.504 } 00:13:29.504 ] 00:13:29.504 } 00:13:29.504 ] 
00:13:29.504 }' 00:13:29.504 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:29.504 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:29.504 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:29.504 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:29.504 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:29.504 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:29.504 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:29.504 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:29.504 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:29.504 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:29.504 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:29.504 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:29.504 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:29.504 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:29.504 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:29.504 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.504 Malloc1 00:13:29.504 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:29.504 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:29.504 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.765 [2024-05-15 00:28:55.697795] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:13:29.765 [2024-05-15 00:28:55.698112] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.765 00:28:55 
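For readers following the trace, the target bring-up above reduces to six RPCs. A minimal stand-alone sketch, assuming rpc_cmd in the trace is the usual thin wrapper around scripts/rpc.py talking to the already-running nvmf_tgt; the bdev name, serial number and 10.0.0.2:4420 listener are copied from the log, not invented:

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, flags as in the trace
  $rpc bdev_malloc_create 64 512 -b Malloc1                         # 64 MiB RAM-backed bdev, 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1     # expose Malloc1 as a namespace
  $rpc nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1  # -d: disable allow-any-host for the ACL test
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420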
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -a 10.0.0.2 -s 4420 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -a 10.0.0.2 -s 4420 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -a 10.0.0.2 -s 4420 00:13:29.765 [2024-05-15 00:28:55.727699] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda' 00:13:29.765 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:29.765 could not add new controller: failed to write to nvme-fabrics device 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:29.765 00:28:55 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:31.142 00:28:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 
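The NOT helper above asserts the negative case: with allow-any-host disabled and no host entry, the target rejects the initiator ("does not allow host") and nvme-cli reports an I/O error on /dev/nvme-fabrics. A hedged sketch of the same negative/positive check outside the harness, using the host NQN/ID pair visible in the trace:

  hostid=80ef6226-405e-ee11-906e-a4bf01973fda
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:$hostid
  subnqn=nqn.2016-06.io.spdk:cnode1
  # Expected to fail while the host is not on the subsystem's allowed list
  if nvme connect --hostnqn=$hostnqn --hostid=$hostid -t tcp -n $subnqn -a 10.0.0.2 -s 4420; then
      echo "unexpected: connect succeeded without an allowed host" >&2
  fi
  # Whitelist the host, after which the same connect is expected to succeed
  scripts/rpc.py nvmf_subsystem_add_host $subnqn $hostnqn
  nvme connect --hostnqn=$hostnqn --hostid=$hostid -t tcp -n $subnqn -a 10.0.0.2 -s 4420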
00:13:31.142 00:28:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:13:31.142 00:28:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:13:31.142 00:28:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:13:31.142 00:28:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:33.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@649 -- # local es=0 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@637 -- # local arg=nvme 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # type -t nvme 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # type -P nvme 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # arg=/usr/sbin/nvme 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@643 -- # [[ -x /usr/sbin/nvme ]] 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:33.676 [2024-05-15 00:28:59.513097] ctrlr.c: 816:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda' 00:13:33.676 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:33.676 could not add new controller: failed to write to nvme-fabrics device 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@652 -- # es=1 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:33.676 00:28:59 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:35.056 00:29:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:35.056 00:29:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:13:35.056 00:29:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:13:35.056 00:29:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:13:35.056 00:29:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:13:36.960 00:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:13:36.960 00:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:13:36.960 00:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:13:36.960 00:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:13:36.960 00:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:13:36.960 00:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:13:36.960 00:29:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:37.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.219 [2024-05-15 00:29:03.279745] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:37.219 00:29:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:38.599 00:29:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:38.599 00:29:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:13:38.599 00:29:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:13:38.599 00:29:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:13:38.599 00:29:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:13:41.135 00:29:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:13:41.135 
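The waitforserial / waitforserial_disconnect calls in the trace are simple retry loops around lsblk. A sketch of the pattern (not the exact helper); the 15-iteration cap and the 2-second sleep are taken from the counters visible above:

  wait_for_serial() {           # block until a block device with this serial shows up
      local serial=$1 i=0
      while (( i++ <= 15 )); do
          (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
          sleep 2
      done
      return 1
  }
  wait_for_serial SPDKISFASTANDAWESOME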
00:29:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:13:41.135 00:29:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:13:41.135 00:29:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:13:41.135 00:29:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:13:41.135 00:29:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:13:41.135 00:29:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:41.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.135 00:29:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:41.135 00:29:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:13:41.135 00:29:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:13:41.135 00:29:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:41.135 00:29:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:13:41.135 00:29:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:41.135 00:29:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:13:41.135 00:29:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:41.135 00:29:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:41.135 00:29:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.135 00:29:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:41.135 00:29:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:41.135 00:29:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:41.135 00:29:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.135 00:29:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:41.135 00:29:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:41.135 00:29:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:41.135 00:29:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:41.135 00:29:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.135 00:29:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:41.135 00:29:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.135 00:29:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:41.135 00:29:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.135 [2024-05-15 00:29:07.028604] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.135 00:29:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:41.135 00:29:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:41.135 00:29:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:41.135 00:29:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set 
+x 00:13:41.135 00:29:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:41.135 00:29:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:41.135 00:29:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:41.135 00:29:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:41.135 00:29:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:41.135 00:29:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:42.510 00:29:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:42.510 00:29:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:13:42.510 00:29:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:13:42.510 00:29:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:13:42.510 00:29:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:13:44.413 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:13:44.413 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:13:44.413 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:13:44.413 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:13:44.413 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:13:44.413 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:13:44.413 00:29:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:44.672 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:44.672 00:29:10 
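Iterations 2 through 5 below repeat the same cycle as iteration 1 above. Condensed into a loop (a sketch under the same assumptions as the earlier snippets: scripts/rpc.py on PATH, Malloc1 already created, hostnqn/hostid as in the ACL sketch; error handling omitted):

  loops=5
  for i in $(seq 1 $loops); do
      scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # attach Malloc1 as nsid 5
      scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1        # no -e/-d flag, as in the trace
      nvme connect --hostnqn=$hostnqn --hostid=$hostid -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
      wait_for_serial SPDKISFASTANDAWESOME              # polling helper sketched earlier
      nvme disconnect -n nqn.2016-06.io.spdk:cnode1
      scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
      scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done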
nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.672 [2024-05-15 00:29:10.757880] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:44.672 00:29:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:46.574 00:29:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:46.574 00:29:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:13:46.574 00:29:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:13:46.574 00:29:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:13:46.574 00:29:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:48.478 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1216 -- # local i=0 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.478 [2024-05-15 00:29:14.521253] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:48.478 00:29:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:49.856 00:29:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial 
SPDKISFASTANDAWESOME 00:13:49.856 00:29:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:13:49.856 00:29:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:13:49.856 00:29:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:13:49.856 00:29:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:13:52.390 00:29:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:13:52.390 00:29:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:13:52.390 00:29:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:13:52.390 00:29:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:13:52.390 00:29:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:13:52.390 00:29:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:13:52.390 00:29:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:52.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.390 
[2024-05-15 00:29:18.254764] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:52.390 00:29:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:53.767 00:29:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:53.767 00:29:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1195 -- # local i=0 00:13:53.767 00:29:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:13:53.767 00:29:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:13:53.767 00:29:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1202 -- # sleep 2 00:13:55.668 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:13:55.668 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:13:55.668 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:13:55.668 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:13:55.668 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:13:55.668 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # return 0 00:13:55.668 00:29:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:55.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1216 -- # local i=0 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1228 -- # return 0 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.930 [2024-05-15 00:29:21.968582] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:55.930 00:29:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:55.931 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:55.931 00:29:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 
-- # xtrace_disable 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.931 [2024-05-15 00:29:22.016585] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.931 [2024-05-15 00:29:22.064613] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:55.931 
00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:55.931 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.257 [2024-05-15 00:29:22.112651] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:56.257 00:29:22 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.257 [2024-05-15 00:29:22.160755] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@560 -- # xtrace_disable 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
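The five iterations just traced (target/rpc.sh lines 99-107) are pure RPC churn: no initiator connects, the namespace is added with the default nsid and removed again before the subsystem is deleted. Roughly, under the same assumptions as the earlier sketches:

  for i in $(seq 1 $loops); do
      scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1     # default nsid (1)
      scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done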
00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:56.257 "tick_rate": 1900000000, 00:13:56.257 "poll_groups": [ 00:13:56.257 { 00:13:56.257 "name": "nvmf_tgt_poll_group_000", 00:13:56.257 "admin_qpairs": 0, 00:13:56.257 "io_qpairs": 224, 00:13:56.257 "current_admin_qpairs": 0, 00:13:56.257 "current_io_qpairs": 0, 00:13:56.257 "pending_bdev_io": 0, 00:13:56.257 "completed_nvme_io": 268, 00:13:56.257 "transports": [ 00:13:56.257 { 00:13:56.257 "trtype": "TCP" 00:13:56.257 } 00:13:56.257 ] 00:13:56.257 }, 00:13:56.257 { 00:13:56.257 "name": "nvmf_tgt_poll_group_001", 00:13:56.257 "admin_qpairs": 1, 00:13:56.257 "io_qpairs": 223, 00:13:56.257 "current_admin_qpairs": 0, 00:13:56.257 "current_io_qpairs": 0, 00:13:56.257 "pending_bdev_io": 0, 00:13:56.257 "completed_nvme_io": 274, 00:13:56.257 "transports": [ 00:13:56.257 { 00:13:56.257 "trtype": "TCP" 00:13:56.257 } 00:13:56.257 ] 00:13:56.257 }, 00:13:56.257 { 00:13:56.257 "name": "nvmf_tgt_poll_group_002", 00:13:56.257 "admin_qpairs": 6, 00:13:56.257 "io_qpairs": 218, 00:13:56.257 "current_admin_qpairs": 0, 00:13:56.257 "current_io_qpairs": 0, 00:13:56.257 "pending_bdev_io": 0, 00:13:56.257 "completed_nvme_io": 422, 00:13:56.257 "transports": [ 00:13:56.257 { 00:13:56.257 "trtype": "TCP" 00:13:56.257 } 00:13:56.257 ] 00:13:56.257 }, 00:13:56.257 { 00:13:56.257 "name": "nvmf_tgt_poll_group_003", 00:13:56.257 "admin_qpairs": 0, 00:13:56.257 "io_qpairs": 224, 00:13:56.257 "current_admin_qpairs": 0, 00:13:56.257 "current_io_qpairs": 0, 00:13:56.257 "pending_bdev_io": 0, 00:13:56.257 "completed_nvme_io": 275, 00:13:56.257 "transports": [ 00:13:56.257 { 00:13:56.257 "trtype": "TCP" 00:13:56.257 } 00:13:56.257 ] 00:13:56.257 } 00:13:56.257 ] 00:13:56.257 }' 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:56.257 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:56.258 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:56.258 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:56.258 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:56.258 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:56.258 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:56.258 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:13:56.258 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:56.258 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:56.258 00:29:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:56.258 00:29:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:56.258 00:29:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:56.258 00:29:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:56.258 00:29:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:56.258 00:29:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:56.258 00:29:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:56.258 rmmod nvme_tcp 00:13:56.258 rmmod nvme_fabrics 00:13:56.258 rmmod nvme_keyring 00:13:56.258 
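The pass/fail decisions above come from the jsum helper, which pulls one field per poll group out of nvmf_get_stats and adds the values up; the trace shows 7 admin qpairs and 889 io qpairs accumulated over the run. A sketch of the same aggregation, assuming scripts/rpc.py for the stats call; the second form is a jq-only equivalent of the jq+awk pipeline seen in the trace:

  stats=$(scripts/rpc.py nvmf_get_stats)
  # one number per poll group, summed by awk - same shape as jsum in target/rpc.sh
  echo "$stats" | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'
  # equivalent without awk
  echo "$stats" | jq '[.poll_groups[].io_qpairs] | add'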
00:29:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:56.258 00:29:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:56.258 00:29:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:56.258 00:29:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1902094 ']' 00:13:56.258 00:29:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1902094 00:13:56.258 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@947 -- # '[' -z 1902094 ']' 00:13:56.258 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@951 -- # kill -0 1902094 00:13:56.258 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # uname 00:13:56.258 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:13:56.258 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1902094 00:13:56.542 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:13:56.542 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:13:56.542 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1902094' 00:13:56.542 killing process with pid 1902094 00:13:56.542 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # kill 1902094 00:13:56.542 [2024-05-15 00:29:22.426002] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:13:56.542 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@971 -- # wait 1902094 00:13:57.110 00:29:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:57.110 00:29:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:57.110 00:29:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:57.110 00:29:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:57.110 00:29:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:57.110 00:29:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.110 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:57.110 00:29:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.011 00:29:25 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:59.011 00:13:59.011 real 0m36.007s 00:13:59.011 user 1m52.656s 00:13:59.011 sys 0m5.885s 00:13:59.011 00:29:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:13:59.011 00:29:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:59.011 ************************************ 00:13:59.011 END TEST nvmf_rpc 00:13:59.011 ************************************ 00:13:59.011 00:29:25 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:59.011 00:29:25 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:13:59.011 00:29:25 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:13:59.011 00:29:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:59.011 ************************************ 00:13:59.011 START TEST nvmf_invalid 00:13:59.011 ************************************ 00:13:59.011 00:29:25 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:59.270 * Looking for test storage... 00:13:59.270 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:59.270 00:29:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:59.270 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:59.270 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.270 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:59.271 00:29:25 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:04.541 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:04.541 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:14:04.541 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:04.541 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:04.541 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:04.541 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:04.541 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:04.541 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:14:04.541 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:04.541 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:14:04.541 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:14:04.541 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:14:04.541 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:14:04.541 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:14:04.541 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:14:04.541 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:04.541 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:04.541 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:04.541 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:04.541 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:04.541 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 
-- # pci_devs+=("${e810[@]}") 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:14:04.542 Found 0000:27:00.0 (0x8086 - 0x159b) 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:14:04.542 Found 0000:27:00.1 (0x8086 - 0x159b) 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:14:04.542 Found net devices under 0000:27:00.0: cvl_0_0 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:14:04.542 Found net devices under 0000:27:00.1: cvl_0_1 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:04.542 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:04.800 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:04.800 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:04.800 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:04.800 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:04.800 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:04.800 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:04.800 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:04.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
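
nvmf_tcp_init, traced just above, turns the two E810 ports into a self-contained test link: cvl_0_0 is moved into a private network namespace and addressed as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), port 4420 is opened in iptables, and both directions are ping-checked. The same steps, condensed into a standalone sketch with the interface and namespace names from this run:

    # Condensed sketch of the nvmf_tcp_init steps traced above.
    TARGET_IF=cvl_0_0            # ends up inside the namespace at 10.0.0.2
    INITIATOR_IF=cvl_0_1         # stays in the root namespace at 10.0.0.1
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                        # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator
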
00:14:04.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.717 ms 00:14:04.800 00:14:04.800 --- 10.0.0.2 ping statistics --- 00:14:04.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.800 rtt min/avg/max/mdev = 0.717/0.717/0.717/0.000 ms 00:14:04.800 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:04.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:04.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:14:04.800 00:14:04.800 --- 10.0.0.1 ping statistics --- 00:14:04.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.800 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:14:04.800 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:04.800 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:14:04.800 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:04.800 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:04.800 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:04.800 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:04.800 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:04.800 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:04.800 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:04.801 00:29:30 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:14:04.801 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:04.801 00:29:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:04.801 00:29:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:04.801 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1911678 00:14:04.801 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1911678 00:14:04.801 00:29:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@828 -- # '[' -z 1911678 ']' 00:14:04.801 00:29:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.801 00:29:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:04.801 00:29:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.801 00:29:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:04.801 00:29:30 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:04.801 00:29:30 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:04.801 [2024-05-15 00:29:30.936808] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
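
The nvmfappstart call traced above launches nvmf_tgt inside that namespace with the shared-memory id, event mask and core mask shown, and waitforlisten blocks until the target's RPC socket answers. The trace does not echo waitforlisten's internals, so the polling loop below is only an illustrative stand-in built on rpc.py rpc_get_methods.

    # Illustrative only: start the target in the test namespace and poll its RPC socket.
    SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Stand-in for waitforlisten: ask for the RPC method list until the app answers.
    for _ in $(seq 1 100); do
        if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods &>/dev/null; then
            echo "target ($nvmfpid) is answering RPCs on /var/tmp/spdk.sock"
            break
        fi
        sleep 0.5
    done
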
00:14:04.801 [2024-05-15 00:29:30.936917] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.059 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.059 [2024-05-15 00:29:31.066853] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:05.059 [2024-05-15 00:29:31.168894] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.059 [2024-05-15 00:29:31.168932] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.059 [2024-05-15 00:29:31.168942] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.059 [2024-05-15 00:29:31.168952] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.059 [2024-05-15 00:29:31.168960] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:05.059 [2024-05-15 00:29:31.169162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.059 [2024-05-15 00:29:31.169250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.059 [2024-05-15 00:29:31.169350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.059 [2024-05-15 00:29:31.169362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:05.624 00:29:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:05.624 00:29:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@861 -- # return 0 00:14:05.624 00:29:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:05.624 00:29:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:05.624 00:29:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:05.624 00:29:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:05.624 00:29:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:05.624 00:29:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode16600 00:14:05.624 [2024-05-15 00:29:31.784270] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:14:05.883 00:29:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:14:05.883 { 00:14:05.883 "nqn": "nqn.2016-06.io.spdk:cnode16600", 00:14:05.883 "tgt_name": "foobar", 00:14:05.883 "method": "nvmf_create_subsystem", 00:14:05.883 "req_id": 1 00:14:05.883 } 00:14:05.883 Got JSON-RPC error response 00:14:05.883 response: 00:14:05.883 { 00:14:05.883 "code": -32603, 00:14:05.883 "message": "Unable to find target foobar" 00:14:05.883 }' 00:14:05.883 00:29:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:14:05.883 { 00:14:05.883 "nqn": "nqn.2016-06.io.spdk:cnode16600", 00:14:05.883 "tgt_name": "foobar", 00:14:05.883 "method": "nvmf_create_subsystem", 00:14:05.883 "req_id": 1 00:14:05.883 } 00:14:05.883 Got JSON-RPC error response 00:14:05.883 response: 00:14:05.883 { 00:14:05.883 "code": -32603, 00:14:05.883 "message": "Unable to find target foobar" 00:14:05.883 } == *\U\n\a\b\l\e\ \t\o\ 
\f\i\n\d\ \t\a\r\g\e\t* ]] 00:14:05.883 00:29:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:14:05.883 00:29:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode16691 00:14:05.883 [2024-05-15 00:29:31.928506] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16691: invalid serial number 'SPDKISFASTANDAWESOME' 00:14:05.883 00:29:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:14:05.883 { 00:14:05.883 "nqn": "nqn.2016-06.io.spdk:cnode16691", 00:14:05.883 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:05.883 "method": "nvmf_create_subsystem", 00:14:05.883 "req_id": 1 00:14:05.883 } 00:14:05.883 Got JSON-RPC error response 00:14:05.883 response: 00:14:05.883 { 00:14:05.883 "code": -32602, 00:14:05.883 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:05.883 }' 00:14:05.883 00:29:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:14:05.883 { 00:14:05.883 "nqn": "nqn.2016-06.io.spdk:cnode16691", 00:14:05.883 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:14:05.883 "method": "nvmf_create_subsystem", 00:14:05.883 "req_id": 1 00:14:05.883 } 00:14:05.883 Got JSON-RPC error response 00:14:05.883 response: 00:14:05.883 { 00:14:05.883 "code": -32602, 00:14:05.883 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:14:05.883 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:14:05.883 00:29:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:14:05.883 00:29:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode18709 00:14:06.142 [2024-05-15 00:29:32.080742] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18709: invalid model number 'SPDK_Controller' 00:14:06.142 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:14:06.142 { 00:14:06.142 "nqn": "nqn.2016-06.io.spdk:cnode18709", 00:14:06.142 "model_number": "SPDK_Controller\u001f", 00:14:06.142 "method": "nvmf_create_subsystem", 00:14:06.142 "req_id": 1 00:14:06.142 } 00:14:06.142 Got JSON-RPC error response 00:14:06.142 response: 00:14:06.142 { 00:14:06.142 "code": -32602, 00:14:06.142 "message": "Invalid MN SPDK_Controller\u001f" 00:14:06.142 }' 00:14:06.142 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:14:06.142 { 00:14:06.142 "nqn": "nqn.2016-06.io.spdk:cnode18709", 00:14:06.142 "model_number": "SPDK_Controller\u001f", 00:14:06.142 "method": "nvmf_create_subsystem", 00:14:06.142 "req_id": 1 00:14:06.142 } 00:14:06.142 Got JSON-RPC error response 00:14:06.142 response: 00:14:06.142 { 00:14:06.142 "code": -32602, 00:14:06.142 "message": "Invalid MN SPDK_Controller\u001f" 00:14:06.142 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:14:06.142 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:14:06.142 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:14:06.142 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' 
'93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:14:06.142 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:14:06.142 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:14:06.142 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:14:06.142 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:06.142 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:06.142 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:06.142 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:06.142 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:06.142 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:06.142 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:14:06.142 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:14:06.142 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:14:06.142 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:06.142 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:06.142 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 96 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ < == \- ]] 00:14:06.143 00:29:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo ' /dev/null' 00:14:08.801 00:29:34 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.709 00:29:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:10.709 00:14:10.709 real 0m11.650s 00:14:10.709 user 0m17.278s 00:14:10.709 sys 0m5.190s 00:14:10.709 00:29:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:10.709 00:29:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:14:10.709 ************************************ 00:14:10.709 END TEST nvmf_invalid 00:14:10.709 ************************************ 00:14:10.709 00:29:36 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:10.709 00:29:36 nvmf_tcp -- 
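
Summarizing the nvmf_invalid trace above: each negative test issues nvmf_create_subsystem through rpc.py with one deliberately bad argument (the unknown target name foobar, a serial or model number containing the non-printable 0x1f byte, or an over-long string produced by gen_random_s), captures the JSON-RPC error, and pattern-matches the message. A condensed sketch of both halves; cnode9999 is a placeholder nqn for this sketch, and the random generator is simplified to the printable, non-space range.

    rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py

    gen_random_s() {
        # Roughly what the long loop above does: pick ASCII codes and append the
        # matching characters one at a time (codes 33-126 here, for simplicity).
        local length=$1 ll string=''
        for ((ll = 0; ll < length; ll++)); do
            string+=$(echo -e "\x$(printf '%x' $((RANDOM % 94 + 33)))")
        done
        echo "$string"
    }

    # Same check pattern as the SN/MN cases above: capture the error, match the text.
    out=$($rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9999 \
          -s "$(gen_random_s 21)" 2>&1) || true
    [[ $out == *"Invalid SN"* ]] && echo 'over-long serial number rejected, as expected'
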
common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:10.709 00:29:36 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:10.709 00:29:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:10.709 ************************************ 00:14:10.709 START TEST nvmf_abort 00:14:10.709 ************************************ 00:14:10.709 00:29:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:14:10.968 * Looking for test storage... 00:14:10.968 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.968 00:29:36 
nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 
-- # eval '_remove_spdk_ns 14> /dev/null' 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:14:10.968 00:29:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 
00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:14:16.239 Found 0000:27:00.0 (0x8086 - 0x159b) 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:14:16.239 Found 0000:27:00.1 (0x8086 - 0x159b) 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:14:16.239 Found net devices under 0000:27:00.0: cvl_0_0 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:14:16.239 Found net devices under 0000:27:00.1: cvl_0_1 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
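
The device discovery that just ran for the abort test (the same loop ran earlier for nvmf_invalid) works from sysfs alone: every whitelisted PCI function is checked for a bound net device under /sys/bus/pci/devices/<addr>/net/, its link state is compared against "up", and the surviving names (cvl_0_0 and cvl_0_1 here) become the TCP interface list. A reduced sketch of that lookup, with the two E810 addresses from this host filled in by hand:

    # Reduced sketch of the sysfs lookup behind the "Found net devices under ..."
    # lines above. On another host the PCI list would come from scanning against
    # the e810/x722/mlx ID tables built earlier in the trace.
    pci_devs=(0000:27:00.0 0000:27:00.1)
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $netdir ]] || continue                        # no netdev bound to this function
            [[ $(cat "$netdir/operstate") == up ]] || continue  # mirrors the [[ up == up ]] check
            net_devs+=("${netdir##*/}")
        done
    done
    printf 'usable net device: %s\n' "${net_devs[@]}"
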
00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:16.239 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:16.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:16.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.669 ms 00:14:16.240 00:14:16.240 --- 10.0.0.2 ping statistics --- 00:14:16.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.240 rtt min/avg/max/mdev = 0.669/0.669/0.669/0.000 ms 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:16.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:16.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:14:16.240 00:14:16.240 --- 10.0.0.1 ping statistics --- 00:14:16.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:16.240 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1916468 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1916468 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@828 -- # '[' -z 1916468 ']' 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:16.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:16.240 00:29:42 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:16.497 [2024-05-15 00:29:42.457003] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:14:16.497 [2024-05-15 00:29:42.457103] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:16.497 EAL: No free 2048 kB hugepages reported on node 1 00:14:16.497 [2024-05-15 00:29:42.599756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:16.756 [2024-05-15 00:29:42.761579] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:16.756 [2024-05-15 00:29:42.761642] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
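
Worth noting while reading the startup notices: the abort run gives the target core mask 0xE, so its reactors land on cores 1-3 (the reactor messages that follow confirm this), leaving core 0 free for the abort example, which is started later with -c 0x1. A tiny helper for decoding such masks:

    # Print the CPU cores selected by an SPDK-style hex core mask.
    decode_mask() {
        local mask=$(( $1 )) core=0 cores=()
        while (( mask )); do
            (( mask & 1 )) && cores+=("$core")
            (( mask >>= 1, core++ ))
        done
        echo "cores: ${cores[*]}"
    }
    decode_mask 0xF   # cores: 0 1 2 3   (nvmf_invalid target above)
    decode_mask 0xE   # cores: 1 2 3     (nvmf_abort target)
    decode_mask 0x1   # cores: 0         (the abort example client)
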
00:14:16.756 [2024-05-15 00:29:42.761659] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:16.756 [2024-05-15 00:29:42.761675] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:16.756 [2024-05-15 00:29:42.761689] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:16.756 [2024-05-15 00:29:42.761879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:16.756 [2024-05-15 00:29:42.761997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:16.756 [2024-05-15 00:29:42.762007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:17.014 00:29:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:17.014 00:29:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@861 -- # return 0 00:14:17.014 00:29:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:17.014 00:29:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:17.014 00:29:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:17.272 00:29:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:17.272 00:29:43 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:14:17.272 00:29:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:17.272 00:29:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:17.272 [2024-05-15 00:29:43.221221] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:17.272 00:29:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:17.272 00:29:43 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:14:17.272 00:29:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:17.272 00:29:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:17.272 Malloc0 00:14:17.272 00:29:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:17.272 00:29:43 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:17.272 00:29:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:17.272 00:29:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:17.272 Delay0 00:14:17.272 00:29:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:17.272 00:29:43 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:17.272 00:29:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:17.272 00:29:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:17.272 00:29:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:17.272 00:29:43 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:14:17.272 00:29:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:17.272 00:29:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:17.272 00:29:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:17.273 00:29:43 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:17.273 00:29:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:17.273 00:29:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:17.273 [2024-05-15 00:29:43.325747] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:17.273 [2024-05-15 00:29:43.326113] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:17.273 00:29:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:17.273 00:29:43 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:17.273 00:29:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:17.273 00:29:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:17.273 00:29:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:17.273 00:29:43 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:14:17.273 EAL: No free 2048 kB hugepages reported on node 1 00:14:17.531 [2024-05-15 00:29:43.509855] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:20.059 Initializing NVMe Controllers 00:14:20.059 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:20.059 controller IO queue size 128 less than required 00:14:20.059 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:14:20.059 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:14:20.059 Initialization complete. Launching workers. 
00:14:20.059 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 126, failed: 47565 00:14:20.059 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 47629, failed to submit 62 00:14:20.059 success 47569, unsuccess 60, failed 0 00:14:20.059 00:29:45 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:20.059 00:29:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:14:20.059 00:29:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:20.059 00:29:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:14:20.059 00:29:45 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:14:20.059 00:29:45 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:14:20.059 00:29:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:20.059 00:29:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:14:20.059 00:29:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:20.059 00:29:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:14:20.059 00:29:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:20.059 00:29:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:20.059 rmmod nvme_tcp 00:14:20.059 rmmod nvme_fabrics 00:14:20.059 rmmod nvme_keyring 00:14:20.059 00:29:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:20.059 00:29:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:14:20.059 00:29:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:14:20.059 00:29:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1916468 ']' 00:14:20.059 00:29:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1916468 00:14:20.059 00:29:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@947 -- # '[' -z 1916468 ']' 00:14:20.059 00:29:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@951 -- # kill -0 1916468 00:14:20.059 00:29:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # uname 00:14:20.059 00:29:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:14:20.059 00:29:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1916468 00:14:20.059 00:29:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:14:20.059 00:29:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:14:20.059 00:29:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1916468' 00:14:20.059 killing process with pid 1916468 00:14:20.059 00:29:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # kill 1916468 00:14:20.059 [2024-05-15 00:29:45.781483] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:14:20.059 00:29:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@971 -- # wait 1916468 00:14:20.318 00:29:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:20.318 00:29:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:20.318 00:29:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:20.318 00:29:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:20.318 
00:29:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:20.318 00:29:46 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.318 00:29:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:20.318 00:29:46 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.233 00:29:48 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:22.233 00:14:22.233 real 0m11.502s 00:14:22.233 user 0m13.986s 00:14:22.233 sys 0m4.740s 00:14:22.233 00:29:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:14:22.233 00:29:48 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:14:22.233 ************************************ 00:14:22.233 END TEST nvmf_abort 00:14:22.233 ************************************ 00:14:22.491 00:29:48 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:22.491 00:29:48 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:14:22.491 00:29:48 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:14:22.491 00:29:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:22.491 ************************************ 00:14:22.491 START TEST nvmf_ns_hotplug_stress 00:14:22.491 ************************************ 00:14:22.491 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:14:22.491 * Looking for test storage... 00:14:22.491 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:14:22.491 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:14:22.491 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:14:22.491 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:22.491 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:22.491 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:22.491 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:22.491 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:22.491 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:22.491 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:22.491 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:22.491 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:22.491 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:22.491 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:14:22.491 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:14:22.491 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:22.491 00:29:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:22.491 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:22.491 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:22.491 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:14:22.491 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:22.491 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:22.491 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:22.491 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.492 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.492 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.492 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:14:22.492 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:22.492 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:14:22.492 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:22.492 
00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:22.492 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:22.492 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:22.492 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:22.492 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:22.492 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:22.492 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:22.492 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:14:22.492 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:14:22.492 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:22.492 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:22.492 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:22.492 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:22.492 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:22.492 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.492 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:22.492 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:22.492 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:14:22.492 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:22.492 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:14:22.492 00:29:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:27.758 00:29:53 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:14:27.758 Found 0000:27:00.0 (0x8086 - 0x159b) 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:14:27.758 Found 0000:27:00.1 (0x8086 - 0x159b) 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:27.758 00:29:53 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:14:27.758 Found net devices under 0000:27:00.0: cvl_0_0 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:14:27.758 Found net devices under 0000:27:00.1: cvl_0_1 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:27.758 
00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:27.758 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:27.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:27.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:14:27.759 00:14:27.759 --- 10.0.0.2 ping statistics --- 00:14:27.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.759 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:27.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:27.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:14:27.759 00:14:27.759 --- 10.0.0.1 ping statistics --- 00:14:27.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:27.759 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@721 -- # xtrace_disable 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1920979 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1920979 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@828 -- # '[' -z 1920979 ']' 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local max_retries=100 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # xtrace_disable 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:27.759 00:29:53 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:28.019 [2024-05-15 00:29:53.960664] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:14:28.019 [2024-05-15 00:29:53.960768] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.019 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.019 [2024-05-15 00:29:54.108105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:28.277 [2024-05-15 00:29:54.268900] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:28.277 [2024-05-15 00:29:54.268955] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:28.277 [2024-05-15 00:29:54.268971] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:28.277 [2024-05-15 00:29:54.268987] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:28.277 [2024-05-15 00:29:54.269001] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:28.277 [2024-05-15 00:29:54.269084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:28.277 [2024-05-15 00:29:54.269194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.277 [2024-05-15 00:29:54.269205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:28.567 00:29:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:14:28.567 00:29:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@861 -- # return 0 00:14:28.567 00:29:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:28.567 00:29:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@727 -- # xtrace_disable 00:14:28.567 00:29:54 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:14:28.567 00:29:54 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:28.567 00:29:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:14:28.567 00:29:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:28.852 [2024-05-15 00:29:54.853144] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:28.852 00:29:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:29.109 00:29:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.109 [2024-05-15 00:29:55.148103] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:14:29.109 [2024-05-15 00:29:55.148436] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.109 00:29:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:29.368 00:29:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:14:29.368 Malloc0 00:14:29.368 00:29:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:29.626 Delay0 00:14:29.626 00:29:55 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:29.884 00:29:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:14:29.884 NULL1 00:14:29.884 00:29:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:30.144 00:29:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:14:30.144 00:29:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1921593 00:14:30.144 00:29:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:30.144 00:29:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.144 EAL: No free 2048 kB hugepages reported on node 1 00:14:30.144 00:29:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:30.405 00:29:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:14:30.405 00:29:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:14:30.405 [2024-05-15 00:29:56.558049] bdev.c:4995:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:14:30.405 true 00:14:30.664 00:29:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:30.664 00:29:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.664 00:29:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:30.924 00:29:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:14:30.924 00:29:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:14:30.924 true 00:14:30.924 00:29:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:30.924 00:29:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.180 00:29:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:31.438 00:29:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:14:31.438 00:29:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:14:31.438 true 00:14:31.438 00:29:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:31.438 00:29:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:31.698 00:29:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:31.698 00:29:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:14:31.698 00:29:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:14:31.956 true 00:14:31.956 00:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:31.956 00:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.214 00:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:32.214 00:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:14:32.214 00:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:14:32.472 true 00:14:32.472 00:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:32.472 00:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.472 00:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:32.729 00:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:14:32.729 00:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:14:32.729 true 00:14:32.987 00:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:32.987 00:29:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:32.987 00:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:33.245 00:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:14:33.245 00:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:14:33.245 true 00:14:33.245 00:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 
00:14:33.245 00:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:33.504 00:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:33.764 00:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:14:33.764 00:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:14:33.764 true 00:14:33.764 00:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:33.764 00:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.022 00:29:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:34.022 00:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:14:34.022 00:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:14:34.279 true 00:14:34.279 00:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:34.279 00:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.279 00:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:34.538 00:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:14:34.538 00:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:14:34.797 true 00:14:34.797 00:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:34.797 00:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:34.797 00:30:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:35.056 00:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:14:35.056 00:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:14:35.056 true 00:14:35.056 00:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:35.056 00:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:35.313 
00:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:35.571 00:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:14:35.571 00:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:14:35.571 true 00:14:35.571 00:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:35.571 00:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:35.828 00:30:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:36.088 00:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:14:36.088 00:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:14:36.088 true 00:14:36.088 00:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:36.088 00:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:36.347 00:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:36.347 00:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:14:36.347 00:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:14:36.607 true 00:14:36.607 00:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:36.607 00:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:36.868 00:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:36.868 00:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:14:36.868 00:30:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:14:37.126 true 00:14:37.126 00:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:37.126 00:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:37.126 00:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:37.388 00:30:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:14:37.388 00:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:14:37.650 true 00:14:37.650 00:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:37.650 00:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:37.650 00:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:37.908 00:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:14:37.908 00:30:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:14:37.908 true 00:14:37.908 00:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:37.908 00:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.168 00:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:38.428 00:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:14:38.428 00:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:14:38.428 true 00:14:38.428 00:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:38.428 00:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:38.685 00:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:38.942 00:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:14:38.942 00:30:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:14:38.942 true 00:14:38.942 00:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:38.942 00:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:39.200 00:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:39.200 00:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:14:39.200 00:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:14:39.459 true 00:14:39.459 00:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:39.459 00:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:39.719 00:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:39.719 00:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:14:39.719 00:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:14:39.979 true 00:14:39.979 00:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:39.979 00:30:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:40.239 00:30:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:40.239 00:30:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:14:40.239 00:30:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:14:40.497 true 00:14:40.497 00:30:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:40.497 00:30:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:40.497 00:30:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:40.755 00:30:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:14:40.755 00:30:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:14:40.755 true 00:14:41.014 00:30:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:41.014 00:30:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:41.014 00:30:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:41.274 00:30:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:14:41.274 00:30:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:14:41.274 true 00:14:41.274 00:30:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 
00:14:41.274 00:30:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:41.535 00:30:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:41.535 00:30:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:14:41.535 00:30:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:14:41.814 true 00:14:41.814 00:30:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:41.814 00:30:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.073 00:30:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:42.073 00:30:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:14:42.073 00:30:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:42.330 true 00:14:42.330 00:30:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:42.330 00:30:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.330 00:30:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:42.589 00:30:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:14:42.589 00:30:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:42.850 true 00:14:42.850 00:30:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:42.850 00:30:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:42.850 00:30:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:43.111 00:30:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:14:43.111 00:30:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:43.111 true 00:14:43.111 00:30:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:43.111 00:30:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.372 
00:30:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:43.632 00:30:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:14:43.632 00:30:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:14:43.632 true 00:14:43.632 00:30:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:43.632 00:30:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:43.890 00:30:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:43.890 00:30:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:14:43.890 00:30:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:14:44.148 true 00:14:44.148 00:30:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:44.148 00:30:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:44.408 00:30:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:44.408 00:30:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:14:44.408 00:30:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:14:44.668 true 00:14:44.668 00:30:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:44.668 00:30:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:44.668 00:30:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:44.926 00:30:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:14:44.926 00:30:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:14:45.184 true 00:14:45.184 00:30:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:45.184 00:30:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:45.184 00:30:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:45.443 00:30:11 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:14:45.443 00:30:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:14:45.443 true 00:14:45.443 00:30:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:45.443 00:30:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:45.701 00:30:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:45.961 00:30:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:14:45.961 00:30:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:14:45.961 true 00:14:45.961 00:30:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:45.961 00:30:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:46.222 00:30:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:46.222 00:30:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:14:46.222 00:30:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:14:46.482 true 00:14:46.482 00:30:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:46.482 00:30:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:46.741 00:30:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:46.741 00:30:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:14:46.741 00:30:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:14:46.999 true 00:14:46.999 00:30:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:46.999 00:30:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:47.258 00:30:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:47.258 00:30:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:14:47.258 00:30:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:14:47.527 true 00:14:47.527 00:30:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:47.527 00:30:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:47.527 00:30:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:47.790 00:30:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:14:47.790 00:30:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:14:47.790 true 00:14:47.790 00:30:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:47.790 00:30:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.050 00:30:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:48.308 00:30:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:14:48.308 00:30:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:14:48.308 true 00:14:48.308 00:30:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:48.308 00:30:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.566 00:30:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:48.566 00:30:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:14:48.566 00:30:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:14:48.827 true 00:14:48.827 00:30:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:48.827 00:30:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.158 00:30:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:49.158 00:30:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:14:49.158 00:30:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:14:49.158 true 00:14:49.158 00:30:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 
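The kill -0 probe at @44 delivers no signal; it only asks whether PID 1921593 still exists, which is what keeps the loop running while the traffic generator is alive (once that process exits, the probe fails with the 'No such process' message seen near the end of this run). A small illustration of the same check with the PID parameterized; the 2>/dev/null suppression is an addition, the traced script lets the error print:

    pid=1921593                                # PID of the background I/O process being watched
    if kill -0 "$pid" 2>/dev/null; then        # signal 0: existence check only, nothing is delivered
        echo "process $pid is still running"
    else
        echo "process $pid has exited"         # corresponds to the later 'kill: (1921593) - No such process'
    fi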
00:14:49.158 00:30:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.437 00:30:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:49.697 00:30:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:14:49.697 00:30:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:14:49.697 true 00:14:49.697 00:30:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:49.697 00:30:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:49.956 00:30:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:49.956 00:30:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:14:49.956 00:30:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:14:50.214 true 00:14:50.214 00:30:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:50.214 00:30:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.472 00:30:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:50.472 00:30:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:14:50.472 00:30:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:14:50.732 true 00:14:50.732 00:30:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:50.732 00:30:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:50.732 00:30:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:50.993 00:30:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:14:50.993 00:30:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:14:50.993 true 00:14:51.252 00:30:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:51.252 00:30:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.252 
00:30:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:51.510 00:30:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:14:51.510 00:30:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:14:51.510 true 00:14:51.510 00:30:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:51.510 00:30:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:51.768 00:30:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:52.026 00:30:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:14:52.026 00:30:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:14:52.026 true 00:14:52.026 00:30:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:52.026 00:30:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:52.286 00:30:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:52.286 00:30:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:14:52.286 00:30:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:14:52.545 true 00:14:52.545 00:30:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:52.545 00:30:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:52.805 00:30:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:52.805 00:30:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:14:52.805 00:30:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:14:53.066 true 00:14:53.066 00:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:53.066 00:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:53.066 00:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:53.324 00:30:19 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:14:53.324 00:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:14:53.324 true 00:14:53.582 00:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:53.582 00:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:53.582 00:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:53.841 00:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:14:53.841 00:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:14:53.841 true 00:14:53.841 00:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:53.841 00:30:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:54.101 00:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:54.101 00:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:14:54.101 00:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:14:54.360 true 00:14:54.360 00:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:54.360 00:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:54.619 00:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:54.619 00:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:14:54.619 00:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:14:54.877 true 00:14:54.877 00:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:54.877 00:30:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:54.877 00:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:55.135 00:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:14:55.135 00:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:14:55.394 true 00:14:55.394 00:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:55.394 00:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:55.394 00:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:55.654 00:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:14:55.654 00:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:14:55.654 true 00:14:55.654 00:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:55.654 00:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:55.912 00:30:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:56.172 00:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1056 00:14:56.172 00:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:14:56.172 true 00:14:56.172 00:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:56.172 00:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.431 00:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:56.431 00:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1057 00:14:56.431 00:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:14:56.689 true 00:14:56.689 00:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:56.689 00:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:56.949 00:30:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:56.949 00:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1058 00:14:56.949 00:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:14:57.209 true 00:14:57.209 00:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 
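NULL1 itself is created before this excerpt begins, so its original parameters are not visible here; the creation call below is an assumption that mirrors the bdev_null_create arguments the script uses later for null0..null7 (name, size in MB, block size), followed by the resize step this phase keeps repeating:

    rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    $rpc_py bdev_null_create NULL1 100 4096    # hypothetical creation: 100 MB null bdev, 4096-byte blocks
    $rpc_py bdev_null_resize NULL1 1058        # grow it to the size used at this point in the loop; prints 'true'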
00:14:57.209 00:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:57.209 00:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:57.484 00:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1059 00:14:57.485 00:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:14:57.485 true 00:14:57.485 00:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:57.485 00:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:57.747 00:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:58.005 00:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1060 00:14:58.005 00:30:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:14:58.005 true 00:14:58.005 00:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:58.005 00:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:58.263 00:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:58.263 00:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1061 00:14:58.263 00:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1061 00:14:58.523 true 00:14:58.523 00:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:58.523 00:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:58.523 00:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:58.783 00:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1062 00:14:58.783 00:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1062 00:14:59.042 true 00:14:59.042 00:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:59.042 00:30:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.042 
00:30:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:59.302 00:30:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1063 00:14:59.302 00:30:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1063 00:14:59.560 true 00:14:59.560 00:30:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:59.560 00:30:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:59.560 00:30:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:59.820 00:30:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1064 00:14:59.820 00:30:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1064 00:14:59.820 true 00:14:59.820 00:30:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:14:59.820 00:30:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:00.080 00:30:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:00.080 00:30:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1065 00:15:00.080 00:30:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1065 00:15:00.341 true 00:15:00.341 00:30:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:15:00.341 00:30:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:00.341 Initializing NVMe Controllers 00:15:00.341 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:00.341 Controller IO queue size 128, less than required. 00:15:00.341 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:00.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:15:00.341 Initialization complete. Launching workers. 
00:15:00.341 ======================================================== 00:15:00.341 Latency(us) 00:15:00.341 Device Information : IOPS MiB/s Average min max 00:15:00.341 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27763.66 13.56 4611.10 3080.67 43939.66 00:15:00.341 ======================================================== 00:15:00.341 Total : 27763.66 13.56 4611.10 3080.67 43939.66 00:15:00.341 00:15:00.601 00:30:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:00.601 00:30:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1066 00:15:00.602 00:30:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1066 00:15:00.860 true 00:15:00.860 00:30:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1921593 00:15:00.860 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1921593) - No such process 00:15:00.860 00:30:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1921593 00:15:00.860 00:30:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:01.120 00:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:01.120 00:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:15:01.120 00:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:15:01.120 00:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:15:01.120 00:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:01.120 00:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:15:01.380 null0 00:15:01.380 00:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:01.380 00:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:01.380 00:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:15:01.380 null1 00:15:01.380 00:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:01.380 00:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:01.380 00:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:15:01.640 null2 00:15:01.640 00:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:01.640 00:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:01.640 00:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
bdev_null_create null3 100 4096 00:15:01.901 null3 00:15:01.901 00:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:01.901 00:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:01.901 00:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:15:01.901 null4 00:15:01.901 00:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:01.901 00:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:01.901 00:30:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:15:02.161 null5 00:15:02.161 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:02.161 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:02.161 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:15:02.161 null6 00:15:02.161 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:02.161 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:02.161 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:15:02.420 null7 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
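From here on the entries interleave because eight copies of the script's add_remove helper now run as background jobs, each pinned to one namespace ID and one null bdev. A sketch of that helper as it can be read back from the @14/@16/@17/@18 markers (argument handling and any error checking in the real script may differ):

    rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    add_remove() {                             # traced as ns_hotplug_stress.sh@14-@18
        local nsid=$1 bdev=$2                  # @14: e.g. nsid=1 bdev=null0
        for ((i = 0; i < 10; i++)); do         # @16: ten add/remove rounds per worker
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"   # @17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"           # @18
        done
    }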
00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:02.420 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
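The launcher side of the same phase is visible in the @58-@64 markers and in the 'wait 1928358 1928361 ...' at @66 just below: eight null bdevs are created, one add_remove worker is started per bdev with namespace IDs 1 through 8 (nsid 1 pairs with null0, nsid 6 with null5, and so on), and the script then waits on all of the collected PIDs. A sketch of that driver loop, with add_remove as sketched above and the same caveat that the loop bodies are reconstructed, not quoted:

    rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    nthreads=8                                  # @58
    pids=()
    for ((i = 0; i < nthreads; i++)); do        # @59-@60: one 100 MB / 4096-byte-block null bdev per worker
        $rpc_py bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do        # @62-@64: launch the workers and remember their PIDs
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"                           # @66: appears in the trace as 'wait 1928358 1928361 ...'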
00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1928358 1928361 1928363 1928364 1928366 1928369 1928370 1928374 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.421 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:02.679 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:02.680 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:02.680 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:02.680 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:02.680 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:02.680 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:02.680 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
7 00:15:02.680 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:02.680 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.680 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.680 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.680 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:02.680 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.680 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:02.680 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.680 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.680 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:02.680 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.680 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.680 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:02.680 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.680 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.680 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:02.680 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.680 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.680 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:02.680 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.680 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.680 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:02.940 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.940 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.940 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:15:02.940 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:02.940 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:02.940 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:02.940 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:02.940 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:02.940 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:02.940 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:02.940 00:30:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:02.940 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.940 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.940 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:02.940 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.940 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.940 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:02.940 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.940 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.940 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:02.940 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:02.940 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:02.940 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:03.200 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:03.200 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:03.200 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:03.200 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:03.200 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:03.200 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:03.200 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:03.200 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:03.200 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:03.200 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:03.200 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:03.200 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:03.200 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:03.200 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:03.200 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:03.200 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:03.200 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:03.200 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:03.200 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:03.200 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:03.200 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:03.200 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:03.460 00:30:29 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:03.460 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:03.720 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:03.720 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:03.720 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:03.721 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:03.721 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:03.721 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:03.721 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:03.721 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:03.721 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:03.721 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:03.721 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:03.721 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:03.721 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:03.721 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:03.721 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:03.721 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:03.721 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:03.721 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:03.721 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:03.721 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:03.721 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:03.978 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:03.978 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:03.978 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:03.978 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:03.978 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:03.978 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:03.978 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:03.978 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:03.979 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:03.979 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:03.979 00:30:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:03.979 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:03.979 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:03.979 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:03.979 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:03.979 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:03.979 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:03.979 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:03.979 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:03.979 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:03.979 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:03.979 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:04.240 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:04.240 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:04.240 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:04.240 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:04.240 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:04.240 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:04.240 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:04.240 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:04.240 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:04.240 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:04.240 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:04.240 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:04.240 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:04.240 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:04.240 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:04.240 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:04.240 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:04.240 00:30:30 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:04.240 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:04.240 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:04.240 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:04.240 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:04.240 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:04.240 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:04.240 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:04.499 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:04.499 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:04.499 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:04.499 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:04.499 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:04.499 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:04.499 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:04.499 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:04.499 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:04.499 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:04.499 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:04.499 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:04.499 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:04.499 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:04.499 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:04.499 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
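The interleaved ns_hotplug_stress.sh@16-@18 entries above and below are the xtrace of the namespace hot-plug loop: repeated passes that add namespaces 1-8 (backed by bdevs null0-null7) to nqn.2016-06.io.spdk:cnode1 over rpc.py and then remove them again, with the calls completing out of order because they run concurrently. A minimal sketch of that loop's shape, reconstructed from the @16-@18 line references rather than quoted from the test script (the real script may differ in detail):

  # Hypothetical reconstruction of the add/remove loop traced in this log, not the verbatim script.
  rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
  subsys=nqn.2016-06.io.spdk:cnode1
  for (( i = 0; i < 10; ++i )); do                                      # @16: (( ++i )) / (( i < 10 ))
      for n in {1..8}; do
          $rpc_py nvmf_subsystem_add_ns -n $n $subsys null$((n - 1)) &  # @17
      done
      for n in {1..8}; do
          $rpc_py nvmf_subsystem_remove_ns $subsys $n &                 # @18
      done
      wait
  done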
00:15:04.499 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:04.499 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:04.499 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:04.499 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:04.499 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:04.500 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:04.500 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:04.500 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:04.500 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:04.758 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:04.758 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:04.758 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:04.758 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:04.758 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:04.758 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:04.758 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:04.758 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:04.758 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:04.758 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:04.758 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:04.758 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:04.758 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:15:04.758 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:04.758 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:04.759 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:04.759 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:04.759 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:04.759 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:04.759 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:04.759 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:04.759 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:04.759 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:04.759 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:04.759 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:04.759 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:04.759 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:04.759 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:05.017 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:05.017 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:05.017 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:05.017 00:30:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:05.017 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:05.017 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.017 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:15:05.017 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:05.017 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:05.017 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:05.017 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.017 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:05.017 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:05.017 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.017 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:05.017 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:05.017 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.017 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:05.017 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:05.017 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.017 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:05.017 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:05.017 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.017 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:05.273 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:05.273 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.273 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:05.273 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:05.273 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:05.273 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.273 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:15:05.273 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:05.273 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:05.273 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:05.273 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:05.273 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.273 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:15:05.273 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:05.273 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:05.273 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:05.273 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:05.273 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.273 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:05.273 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:05.273 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:05.273 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.273 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:15:05.531 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:05.531 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.531 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:15:05.531 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:05.531 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.531 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:15:05.531 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:15:05.531 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:05.531 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:05.531 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.531 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:05.531 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.531 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:15:05.531 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:05.531 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.531 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:15:05.531 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:05.531 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:15:05.788 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:05.788 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:05.788 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.788 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:05.788 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.788 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:15:05.788 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:05.788 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.788 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:15:05.788 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:15:05.788 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:15:05.788 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.788 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:05.788 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.788 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:05.788 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:05.788 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:15:05.788 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:05.788 00:30:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:06.046 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:15:06.046 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:15:06.046 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:06.046 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:15:06.046 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:06.046 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:15:06.046 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:06.046 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:15:06.046 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:06.046 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:06.046 rmmod nvme_tcp 00:15:06.046 rmmod nvme_fabrics 00:15:06.046 rmmod nvme_keyring 00:15:06.046 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:06.046 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:15:06.046 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:15:06.046 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1920979 ']' 00:15:06.046 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1920979 00:15:06.046 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@947 -- # '[' -z 1920979 ']' 00:15:06.046 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@951 -- # kill -0 1920979 00:15:06.046 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # uname 00:15:06.046 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:06.046 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1920979 00:15:06.046 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:15:06.046 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:15:06.046 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1920979' 00:15:06.047 killing process with pid 1920979 00:15:06.047 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress 
-- common/autotest_common.sh@966 -- # kill 1920979
00:15:06.047 [2024-05-15 00:30:32.179101] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times
00:15:06.047 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # wait 1920979
00:15:06.613 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:15:06.613 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:15:06.613 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:15:06.613 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:15:06.613 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:15:06.613 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:06.613 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:15:06.613 00:30:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:09.142 00:30:34 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:15:09.142
00:15:09.142 real 0m46.286s
00:15:09.142 user 3m13.811s
00:15:09.142 sys 0m15.717s
00:15:09.142 00:30:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # xtrace_disable
00:15:09.142 00:30:34 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:15:09.142 ************************************
00:15:09.142 END TEST nvmf_ns_hotplug_stress
00:15:09.142 ************************************
00:15:09.142 00:30:34 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:15:09.142 00:30:34 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']'
00:15:09.142 00:30:34 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable
00:15:09.142 00:30:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:15:09.142 ************************************
00:15:09.142 START TEST nvmf_connect_stress
00:15:09.142 ************************************
00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:15:09.142 * Looking for test storage...
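The tail of the run above is the standard teardown: nvmftestfini unloads nvme-tcp and nvme-fabrics (the bare rmmod lines are modprobe's verbose output), killprocess stops the nvmf_tgt reactor (pid 1920979 here), remove_spdk_ns drops the test network namespace, and the cvl_0_1 addresses are flushed before the 46-second test is marked finished and run_test moves on to nvmf_connect_stress, whose storage probe continues below. An approximate sketch of that cleanup order, inferred from the nvmf/common.sh line references (helper bodies are assumed, not quoted):

  # Approximate nvmftestfini sequence for the tcp transport; helper internals are assumed.
  sync
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break          # retried: the module can still be busy right after the test
  done
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid"            # nvmfpid was 1920979 in this run
  ip netns delete cvl_0_0_ns_spdk 2> /dev/null || true   # assumed equivalent of remove_spdk_ns
  ip -4 addr flush cvl_0_1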
00:15:09.142 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:15:09.142 00:30:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ '' 
== mlx5 ]] 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:15:14.454 Found 0000:27:00.0 (0x8086 - 0x159b) 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:15:14.454 Found 0000:27:00.1 (0x8086 - 0x159b) 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:15:14.454 Found net devices under 0000:27:00.0: cvl_0_0 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:14.454 
00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:15:14.454 Found net devices under 0000:27:00.1: cvl_0_1 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:14.454 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:14.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:14.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.682 ms 00:15:14.455 00:15:14.455 --- 10.0.0.2 ping statistics --- 00:15:14.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.455 rtt min/avg/max/mdev = 0.682/0.682/0.682/0.000 ms 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:14.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:14.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:15:14.455 00:15:14.455 --- 10.0.0.1 ping statistics --- 00:15:14.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.455 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@721 -- # xtrace_disable 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1933240 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1933240 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@828 -- # '[' -z 1933240 ']' 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:14.455 00:30:40 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:14.714 [2024-05-15 00:30:40.652374] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
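The namespace plumbing traced above (nvmf_tcp_init in nvmf/common.sh) reduces to a short recipe: move one port of the NIC pair into a private network namespace to act as the target side, keep the other port in the root namespace as the initiator side, address both on 10.0.0.0/24, open TCP port 4420, and ping in both directions. The sketch below is a hedged condensation of that trace, not the verbatim helper; the interface names, addresses and port are taken from the log, everything else is illustrative.

    # Condensed sketch of the traced setup (assumes the two ports already exist).
    TARGET_IF=cvl_0_0            # becomes the target-side interface
    INITIATOR_IF=cvl_0_1         # stays in the root namespace as the initiator
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TARGET_IF"
    ip -4 addr flush "$INITIATOR_IF"
    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                        # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator
    # nvmf_tgt is then launched with "ip netns exec $NS ..." so it listens on 10.0.0.2,
    # which is why every target-side command in the trace is prefixed the same way.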
00:15:14.715 [2024-05-15 00:30:40.652503] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.715 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.715 [2024-05-15 00:30:40.818614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:14.973 [2024-05-15 00:30:40.986396] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:14.974 [2024-05-15 00:30:40.986466] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:14.974 [2024-05-15 00:30:40.986487] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:14.974 [2024-05-15 00:30:40.986503] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:14.974 [2024-05-15 00:30:40.986517] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:14.974 [2024-05-15 00:30:40.986712] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:14.974 [2024-05-15 00:30:40.986827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.974 [2024-05-15 00:30:40.986837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:15.233 00:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:15.233 00:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@861 -- # return 0 00:15:15.233 00:30:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:15.233 00:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:15.233 00:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:15.494 00:30:41 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:15.494 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:15.494 00:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:15.494 00:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:15.494 [2024-05-15 00:30:41.410981] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:15.494 00:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:15.495 [2024-05-15 00:30:41.450115] nvmf_rpc.c: 615:decode_rpc_listen_address: 
*WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:15.495 [2024-05-15 00:30:41.450512] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:15.495 NULL1 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1933550 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:15.495 00:30:41 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:15.495 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:15.495 00:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:15.755 00:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:15.755 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:15.755 00:30:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:15.755 00:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:15.755 00:30:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:16.321 00:30:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:16.321 00:30:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:16.321 00:30:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:16.321 00:30:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:16.321 00:30:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:16.579 00:30:42 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:16.579 00:30:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:16.579 00:30:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:16.579 00:30:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:16.579 00:30:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:16.839 00:30:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:16.839 00:30:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:16.839 00:30:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:16.839 00:30:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:16.839 00:30:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:17.099 00:30:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:17.099 00:30:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:17.099 00:30:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:17.099 00:30:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:17.099 00:30:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:17.359 00:30:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:17.359 00:30:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:17.359 00:30:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:17.359 00:30:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:17.359 00:30:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:17.925 00:30:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:17.925 00:30:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:17.925 00:30:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:17.925 00:30:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:17.925 00:30:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:18.183 00:30:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:18.183 00:30:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:18.183 00:30:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:18.183 00:30:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:18.183 00:30:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:18.443 00:30:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:18.443 00:30:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:18.443 00:30:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:18.443 00:30:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:18.443 00:30:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:18.703 00:30:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 
-- # [[ 0 == 0 ]] 00:15:18.703 00:30:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:18.703 00:30:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:18.703 00:30:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:18.703 00:30:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:18.962 00:30:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:18.962 00:30:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:18.962 00:30:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:18.962 00:30:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:18.962 00:30:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:19.528 00:30:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:19.528 00:30:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:19.528 00:30:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:19.528 00:30:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:19.528 00:30:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:19.786 00:30:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:19.786 00:30:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:19.786 00:30:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:19.786 00:30:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:19.786 00:30:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:20.046 00:30:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:20.046 00:30:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:20.046 00:30:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:20.046 00:30:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:20.046 00:30:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:20.306 00:30:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:20.306 00:30:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:20.306 00:30:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:20.306 00:30:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:20.306 00:30:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:20.564 00:30:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:20.564 00:30:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:20.564 00:30:46 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:20.564 00:30:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:20.564 00:30:46 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:21.132 00:30:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.132 00:30:47 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:21.132 00:30:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:21.132 00:30:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.132 00:30:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:21.389 00:30:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.389 00:30:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:21.389 00:30:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:21.389 00:30:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.389 00:30:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:21.648 00:30:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.648 00:30:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:21.648 00:30:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:21.648 00:30:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.648 00:30:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:21.909 00:30:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:21.909 00:30:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:21.909 00:30:47 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:21.909 00:30:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:21.909 00:30:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:22.167 00:30:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:22.167 00:30:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:22.167 00:30:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:22.167 00:30:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:22.167 00:30:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:22.733 00:30:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:22.733 00:30:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:22.733 00:30:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:22.733 00:30:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:22.733 00:30:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:22.991 00:30:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:22.991 00:30:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:22.991 00:30:48 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:22.991 00:30:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:22.991 00:30:48 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:23.249 00:30:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:23.249 00:30:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 
-- # kill -0 1933550 00:15:23.249 00:30:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:23.249 00:30:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:23.249 00:30:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:23.508 00:30:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:23.508 00:30:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:23.508 00:30:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:23.508 00:30:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:23.508 00:30:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:23.767 00:30:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:23.767 00:30:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:23.767 00:30:49 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:23.767 00:30:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:23.767 00:30:49 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:24.335 00:30:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:24.335 00:30:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:24.335 00:30:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:24.335 00:30:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:24.335 00:30:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:24.592 00:30:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:24.592 00:30:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:24.592 00:30:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:24.592 00:30:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:24.592 00:30:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:24.851 00:30:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:24.851 00:30:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:24.851 00:30:50 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:24.851 00:30:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:24.851 00:30:50 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:25.153 00:30:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:25.153 00:30:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:25.153 00:30:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:25.153 00:30:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:25.153 00:30:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:25.440 00:30:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:25.440 00:30:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:25.440 00:30:51 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:25.440 00:30:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:25.440 00:30:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:25.440 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:25.698 00:30:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:25.698 00:30:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1933550 00:15:25.698 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1933550) - No such process 00:15:25.698 00:30:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1933550 00:15:25.698 00:30:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:25.698 00:30:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:25.698 00:30:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:25.698 00:30:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:25.698 00:30:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:15:25.698 00:30:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:25.698 00:30:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:15:25.698 00:30:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:25.698 00:30:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:25.698 rmmod nvme_tcp 00:15:25.698 rmmod nvme_fabrics 00:15:25.956 rmmod nvme_keyring 00:15:25.956 00:30:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:25.956 00:30:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:15:25.956 00:30:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:15:25.956 00:30:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1933240 ']' 00:15:25.956 00:30:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1933240 00:15:25.956 00:30:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@947 -- # '[' -z 1933240 ']' 00:15:25.956 00:30:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@951 -- # kill -0 1933240 00:15:25.956 00:30:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # uname 00:15:25.956 00:30:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:25.956 00:30:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1933240 00:15:25.956 00:30:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:15:25.956 00:30:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:15:25.956 00:30:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1933240' 00:15:25.956 killing process with pid 1933240 00:15:25.956 00:30:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # kill 1933240 00:15:25.956 [2024-05-15 00:30:51.945193] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for 
removal in v24.09 hit 1 times 00:15:25.956 00:30:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@971 -- # wait 1933240 00:15:26.522 00:30:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:26.522 00:30:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:26.522 00:30:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:26.522 00:30:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:26.522 00:30:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:26.522 00:30:52 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:26.522 00:30:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:26.522 00:30:52 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.429 00:30:54 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:28.430 00:15:28.430 real 0m19.682s 00:15:28.430 user 0m43.690s 00:15:28.430 sys 0m6.123s 00:15:28.430 00:30:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:28.430 00:30:54 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:28.430 ************************************ 00:15:28.430 END TEST nvmf_connect_stress 00:15:28.430 ************************************ 00:15:28.430 00:30:54 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:28.430 00:30:54 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:15:28.430 00:30:54 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:28.430 00:30:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:28.430 ************************************ 00:15:28.430 START TEST nvmf_fused_ordering 00:15:28.430 ************************************ 00:15:28.430 00:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:28.691 * Looking for test storage... 
00:15:28.691 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:15:28.691 00:30:54 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ '' 
== mlx5 ]] 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:15:35.260 Found 0000:27:00.0 (0x8086 - 0x159b) 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:15:35.260 Found 0000:27:00.1 (0x8086 - 0x159b) 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:35.260 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:15:35.261 Found net devices under 0000:27:00.0: cvl_0_0 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:35.261 
00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:15:35.261 Found net devices under 0000:27:00.1: cvl_0_1 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:35.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:35.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:15:35.261 00:15:35.261 --- 10.0.0.2 ping statistics --- 00:15:35.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.261 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:35.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:35.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:15:35.261 00:15:35.261 --- 10.0.0.1 ping statistics --- 00:15:35.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.261 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@721 -- # xtrace_disable 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1939687 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1939687 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@828 -- # '[' -z 1939687 ']' 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.261 00:31:00 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:35.261 [2024-05-15 00:31:00.767065] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
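For context on the "Found 0000:27:00.x" and "Found net devices under ..." lines just above: gather_supported_nvmf_pci_devs matches the host's PCI devices against a list of supported Intel (e810/x722) and Mellanox device IDs, then resolves each matching PCI address to its kernel net device through sysfs. Below is a hedged sketch of that resolution step, using the PCI addresses reported in this run; it is illustrative, not the verbatim nvmf/common.sh code.

    pci_devs=(0000:27:00.0 0000:27:00.1)    # the two 0x8086:0x159b (ice) ports found above
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
        net_devs+=("${pci_net_devs[@]}")
    done
    echo "Found net devices: ${net_devs[*]}"               # cvl_0_0 cvl_0_1 in this run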
00:15:35.261 [2024-05-15 00:31:00.767197] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.261 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.261 [2024-05-15 00:31:00.930764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.261 [2024-05-15 00:31:01.088278] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.261 [2024-05-15 00:31:01.088340] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:35.261 [2024-05-15 00:31:01.088356] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:35.261 [2024-05-15 00:31:01.088373] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:35.261 [2024-05-15 00:31:01.088386] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:35.261 [2024-05-15 00:31:01.088442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.521 00:31:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:35.521 00:31:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@861 -- # return 0 00:15:35.521 00:31:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:35.521 00:31:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:35.522 00:31:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.522 00:31:01 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:35.522 00:31:01 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:35.522 00:31:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:35.522 00:31:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.522 [2024-05-15 00:31:01.521917] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:35.522 00:31:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:35.522 00:31:01 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:35.522 00:31:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:35.522 00:31:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.522 00:31:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:35.522 00:31:01 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:35.522 00:31:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:35.522 00:31:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.522 [2024-05-15 00:31:01.537824] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:35.522 [2024-05-15 00:31:01.538202] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:35.522 00:31:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:35.522 00:31:01 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:35.522 00:31:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:35.522 00:31:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.522 NULL1 00:15:35.522 00:31:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:35.522 00:31:01 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:35.522 00:31:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:35.522 00:31:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.522 00:31:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:35.522 00:31:01 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:35.522 00:31:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:35.522 00:31:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.522 00:31:01 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:35.522 00:31:01 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:35.522 [2024-05-15 00:31:01.611181] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
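The rpc_cmd invocations above are the harness's wrapper around scripts/rpc.py talking to the target on /var/tmp/spdk.sock (the socket named a few lines earlier). Spelled out as plain rpc.py calls, the target configuration and the test run look roughly as follows; every flag is copied from the trace, while the rpc.py form itself is the usual SPDK convention rather than something this log prints verbatim:

  SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                  # flags taken verbatim from NVMF_TRANSPORT_OPTS plus the trace
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 10                                            # allow any host, serial number, max 10 namespaces
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420                                                # listen on the namespaced target address
  $SPDK/scripts/rpc.py bdev_null_create NULL1 1000 512                          # 1000 MB null bdev, 512-byte blocks ("size: 1GB" below)
  $SPDK/scripts/rpc.py bdev_wait_for_examine
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # exposed as Namespace ID 1
  $SPDK/test/nvme/fused_ordering/fused_ordering \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'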
00:15:35.522 [2024-05-15 00:31:01.611276] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1939849 ] 00:15:35.781 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.350 Attached to nqn.2016-06.io.spdk:cnode1 00:15:36.350 Namespace ID: 1 size: 1GB 00:15:36.350 fused_ordering(0) 00:15:36.350 fused_ordering(1) 00:15:36.350 fused_ordering(2) 00:15:36.350 fused_ordering(3) 00:15:36.350 fused_ordering(4) 00:15:36.350 fused_ordering(5) 00:15:36.350 fused_ordering(6) 00:15:36.350 fused_ordering(7) 00:15:36.350 fused_ordering(8) 00:15:36.350 fused_ordering(9) 00:15:36.350 fused_ordering(10) 00:15:36.350 fused_ordering(11) 00:15:36.350 fused_ordering(12) 00:15:36.350 fused_ordering(13) 00:15:36.350 fused_ordering(14) 00:15:36.350 fused_ordering(15) 00:15:36.350 fused_ordering(16) 00:15:36.350 fused_ordering(17) 00:15:36.350 fused_ordering(18) 00:15:36.350 fused_ordering(19) 00:15:36.350 fused_ordering(20) 00:15:36.350 fused_ordering(21) 00:15:36.350 fused_ordering(22) 00:15:36.350 fused_ordering(23) 00:15:36.350 fused_ordering(24) 00:15:36.350 fused_ordering(25) 00:15:36.350 fused_ordering(26) 00:15:36.350 fused_ordering(27) 00:15:36.350 fused_ordering(28) 00:15:36.350 fused_ordering(29) 00:15:36.350 fused_ordering(30) 00:15:36.350 fused_ordering(31) 00:15:36.350 fused_ordering(32) 00:15:36.350 fused_ordering(33) 00:15:36.350 fused_ordering(34) 00:15:36.350 fused_ordering(35) 00:15:36.350 fused_ordering(36) 00:15:36.350 fused_ordering(37) 00:15:36.350 fused_ordering(38) 00:15:36.350 fused_ordering(39) 00:15:36.350 fused_ordering(40) 00:15:36.350 fused_ordering(41) 00:15:36.350 fused_ordering(42) 00:15:36.350 fused_ordering(43) 00:15:36.350 fused_ordering(44) 00:15:36.350 fused_ordering(45) 00:15:36.350 fused_ordering(46) 00:15:36.350 fused_ordering(47) 00:15:36.350 fused_ordering(48) 00:15:36.350 fused_ordering(49) 00:15:36.350 fused_ordering(50) 00:15:36.350 fused_ordering(51) 00:15:36.350 fused_ordering(52) 00:15:36.350 fused_ordering(53) 00:15:36.350 fused_ordering(54) 00:15:36.350 fused_ordering(55) 00:15:36.350 fused_ordering(56) 00:15:36.350 fused_ordering(57) 00:15:36.350 fused_ordering(58) 00:15:36.350 fused_ordering(59) 00:15:36.350 fused_ordering(60) 00:15:36.350 fused_ordering(61) 00:15:36.350 fused_ordering(62) 00:15:36.350 fused_ordering(63) 00:15:36.350 fused_ordering(64) 00:15:36.350 fused_ordering(65) 00:15:36.350 fused_ordering(66) 00:15:36.350 fused_ordering(67) 00:15:36.350 fused_ordering(68) 00:15:36.350 fused_ordering(69) 00:15:36.350 fused_ordering(70) 00:15:36.350 fused_ordering(71) 00:15:36.350 fused_ordering(72) 00:15:36.350 fused_ordering(73) 00:15:36.350 fused_ordering(74) 00:15:36.350 fused_ordering(75) 00:15:36.350 fused_ordering(76) 00:15:36.350 fused_ordering(77) 00:15:36.350 fused_ordering(78) 00:15:36.350 fused_ordering(79) 00:15:36.350 fused_ordering(80) 00:15:36.350 fused_ordering(81) 00:15:36.350 fused_ordering(82) 00:15:36.350 fused_ordering(83) 00:15:36.350 fused_ordering(84) 00:15:36.350 fused_ordering(85) 00:15:36.350 fused_ordering(86) 00:15:36.350 fused_ordering(87) 00:15:36.350 fused_ordering(88) 00:15:36.350 fused_ordering(89) 00:15:36.350 fused_ordering(90) 00:15:36.350 fused_ordering(91) 00:15:36.350 fused_ordering(92) 00:15:36.350 fused_ordering(93) 00:15:36.350 fused_ordering(94) 00:15:36.350 fused_ordering(95) 00:15:36.350 fused_ordering(96) 00:15:36.350 
fused_ordering(97) 00:15:36.350 ... fused_ordering(956) 00:15:37.705 [860 consecutive fused_ordering(n) counter lines, n = 97 through 956, elided; the sequence is unbroken and identical in form to the entries kept above and below, with timestamps advancing from 00:15:36.350 to 00:15:37.705]
fused_ordering(957) 00:15:37.705 fused_ordering(958) 00:15:37.705 fused_ordering(959) 00:15:37.705 fused_ordering(960) 00:15:37.705 fused_ordering(961) 00:15:37.705 fused_ordering(962) 00:15:37.705 fused_ordering(963) 00:15:37.705 fused_ordering(964) 00:15:37.705 fused_ordering(965) 00:15:37.705 fused_ordering(966) 00:15:37.705 fused_ordering(967) 00:15:37.705 fused_ordering(968) 00:15:37.705 fused_ordering(969) 00:15:37.705 fused_ordering(970) 00:15:37.705 fused_ordering(971) 00:15:37.705 fused_ordering(972) 00:15:37.705 fused_ordering(973) 00:15:37.705 fused_ordering(974) 00:15:37.705 fused_ordering(975) 00:15:37.705 fused_ordering(976) 00:15:37.705 fused_ordering(977) 00:15:37.705 fused_ordering(978) 00:15:37.705 fused_ordering(979) 00:15:37.705 fused_ordering(980) 00:15:37.705 fused_ordering(981) 00:15:37.705 fused_ordering(982) 00:15:37.705 fused_ordering(983) 00:15:37.705 fused_ordering(984) 00:15:37.705 fused_ordering(985) 00:15:37.705 fused_ordering(986) 00:15:37.705 fused_ordering(987) 00:15:37.705 fused_ordering(988) 00:15:37.705 fused_ordering(989) 00:15:37.705 fused_ordering(990) 00:15:37.705 fused_ordering(991) 00:15:37.705 fused_ordering(992) 00:15:37.705 fused_ordering(993) 00:15:37.705 fused_ordering(994) 00:15:37.705 fused_ordering(995) 00:15:37.706 fused_ordering(996) 00:15:37.706 fused_ordering(997) 00:15:37.706 fused_ordering(998) 00:15:37.706 fused_ordering(999) 00:15:37.706 fused_ordering(1000) 00:15:37.706 fused_ordering(1001) 00:15:37.706 fused_ordering(1002) 00:15:37.706 fused_ordering(1003) 00:15:37.706 fused_ordering(1004) 00:15:37.706 fused_ordering(1005) 00:15:37.706 fused_ordering(1006) 00:15:37.706 fused_ordering(1007) 00:15:37.706 fused_ordering(1008) 00:15:37.706 fused_ordering(1009) 00:15:37.706 fused_ordering(1010) 00:15:37.706 fused_ordering(1011) 00:15:37.706 fused_ordering(1012) 00:15:37.706 fused_ordering(1013) 00:15:37.706 fused_ordering(1014) 00:15:37.706 fused_ordering(1015) 00:15:37.706 fused_ordering(1016) 00:15:37.706 fused_ordering(1017) 00:15:37.706 fused_ordering(1018) 00:15:37.706 fused_ordering(1019) 00:15:37.706 fused_ordering(1020) 00:15:37.706 fused_ordering(1021) 00:15:37.706 fused_ordering(1022) 00:15:37.706 fused_ordering(1023) 00:15:37.706 00:31:03 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:37.706 00:31:03 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:37.706 00:31:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:37.706 00:31:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:15:37.706 00:31:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:37.706 00:31:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:15:37.706 00:31:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:37.706 00:31:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:37.706 rmmod nvme_tcp 00:15:37.706 rmmod nvme_fabrics 00:15:37.706 rmmod nvme_keyring 00:15:37.706 00:31:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:37.706 00:31:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:15:37.706 00:31:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:15:37.706 00:31:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1939687 ']' 00:15:37.706 00:31:03 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1939687 
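At this point all 1024 fused commands have completed and the EXIT trap fires, so the harness unwinds the fixture: nvmftestfini unloads the kernel NVMe/TCP modules, kills the nvmf_tgt process, and removes the test namespace. Condensed from the trace (the namespace removal is what _remove_spdk_ns is assumed to do; the trace names the helper but never expands it), the teardown is approximately:

  modprobe -v -r nvme-tcp              # pulls out nvme_tcp/nvme_fabrics/nvme_keyring, matching the rmmod lines above
  modprobe -v -r nvme-fabrics
  kill 1939687 && wait 1939687         # killprocess: stop the nvmf_tgt reactor started for this test
  ip netns delete cvl_0_0_ns_spdk      # assumed body of _remove_spdk_ns, not shown expanded in the trace
  ip -4 addr flush cvl_0_1             # matches nvmf/common.sh@279 a few lines below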
00:15:37.706 00:31:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@947 -- # '[' -z 1939687 ']' 00:15:37.706 00:31:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@951 -- # kill -0 1939687 00:15:37.706 00:31:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # uname 00:15:37.706 00:31:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:37.706 00:31:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1939687 00:15:37.706 00:31:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:15:37.706 00:31:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:15:37.706 00:31:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1939687' 00:15:37.706 killing process with pid 1939687 00:15:37.706 00:31:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # kill 1939687 00:15:37.706 [2024-05-15 00:31:03.698083] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:37.706 00:31:03 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # wait 1939687 00:15:38.275 00:31:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:38.275 00:31:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:38.275 00:31:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:38.275 00:31:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:38.276 00:31:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:38.276 00:31:04 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.276 00:31:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:38.276 00:31:04 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.185 00:31:06 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:40.185 00:15:40.185 real 0m11.688s 00:15:40.185 user 0m6.510s 00:15:40.185 sys 0m5.495s 00:15:40.185 00:31:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:40.185 00:31:06 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:40.185 ************************************ 00:15:40.185 END TEST nvmf_fused_ordering 00:15:40.185 ************************************ 00:15:40.185 00:31:06 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:40.185 00:31:06 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:15:40.185 00:31:06 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:40.185 00:31:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:40.185 ************************************ 00:15:40.185 START TEST nvmf_delete_subsystem 00:15:40.185 ************************************ 00:15:40.185 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:15:40.445 * 
Looking for test storage... 00:15:40.445 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:15:40.445 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:15:40.445 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:15:40.445 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.445 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.445 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.445 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.445 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.445 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.445 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.445 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.445 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.445 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.445 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:15:40.445 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:15:40.445 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.445 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.445 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:40.445 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.445 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:15:40.445 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.445 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.445 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.445 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.446 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.446 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.446 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:15:40.446 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.446 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:15:40.446 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:40.446 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:40.446 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:40.446 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.446 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.446 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:40.446 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:40.446 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:40.446 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:15:40.446 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:40.446 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.446 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:40.446 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:40.446 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:40.446 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.446 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:40.446 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.446 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:15:40.446 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:40.446 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:15:40.446 00:31:06 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:15:47.018 Found 0000:27:00.0 (0x8086 - 0x159b) 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:47.018 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:15:47.019 Found 0000:27:00.1 (0x8086 - 0x159b) 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:15:47.019 Found net devices under 0000:27:00.0: cvl_0_0 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.019 00:31:12 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:15:47.019 Found net devices under 0000:27:00.1: cvl_0_1 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:47.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:47.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:15:47.019 00:15:47.019 --- 10.0.0.2 ping statistics --- 00:15:47.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.019 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:47.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:47.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:15:47.019 00:15:47.019 --- 10.0.0.1 ping statistics --- 00:15:47.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.019 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@721 -- # xtrace_disable 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1944327 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1944327 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@828 -- # '[' -z 1944327 ']' 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local max_retries=100 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # xtrace_disable 00:15:47.019 00:31:12 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:47.019 [2024-05-15 00:31:12.697841] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
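The nvmf_tcp_init / nvmfappstart steps traced above boil down to a short sequence of iproute2 commands plus launching nvmf_tgt inside the new namespace. The sketch below is assembled only from the commands visible in the trace (the interface names cvl_0_0/cvl_0_1, the 10.0.0.x addresses and the nvmf_tgt flags are verbatim); packaging them as a standalone script run from the SPDK repo root is illustrative.

  #!/usr/bin/env bash
  # Move one NIC port into a private namespace so the target (10.0.0.2,
  # inside the netns) and the initiator (10.0.0.1, root namespace) can
  # exercise NVMe/TCP over real hardware on a single host.
  set -e

  TARGET_IF=cvl_0_0        # port handed to the target namespace
  INITIATOR_IF=cvl_0_1     # port left in the root namespace
  NETNS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"

  ip netns add "$NETNS"
  ip link set "$TARGET_IF" netns "$NETNS"

  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

  ip link set "$INITIATOR_IF" up
  ip netns exec "$NETNS" ip link set "$TARGET_IF" up
  ip netns exec "$NETNS" ip link set lo up

  # Accept inbound NVMe/TCP traffic (port 4420) on the initiator-side port.
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT

  # Check connectivity in both directions before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec "$NETNS" ping -c 1 10.0.0.1

  modprobe nvme-tcp

  # Start the NVMe-oF target inside the namespace: shm id 0, all
  # tracepoint groups enabled, reactors on cores 0-1 (mask 0x3).
  ip netns exec "$NETNS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &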
00:15:47.019 [2024-05-15 00:31:12.697971] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.019 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.019 [2024-05-15 00:31:12.836269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:47.019 [2024-05-15 00:31:12.942231] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:47.019 [2024-05-15 00:31:12.942281] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:47.019 [2024-05-15 00:31:12.942292] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:47.019 [2024-05-15 00:31:12.942302] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:47.019 [2024-05-15 00:31:12.942311] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:47.019 [2024-05-15 00:31:12.942386] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.019 [2024-05-15 00:31:12.942397] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:47.280 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:15:47.280 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@861 -- # return 0 00:15:47.280 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:47.280 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@727 -- # xtrace_disable 00:15:47.280 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:47.540 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:47.540 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:47.540 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:47.540 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:47.540 [2024-05-15 00:31:13.456930] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:47.540 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:47.540 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:47.540 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:47.540 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:47.540 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:47.540 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:47.540 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:47.540 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:47.540 [2024-05-15 00:31:13.472929] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature 
[listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:15:47.540 [2024-05-15 00:31:13.473212] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:47.540 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:47.540 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:47.540 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:47.540 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:47.540 NULL1 00:15:47.540 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:47.540 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:47.540 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:47.540 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:47.540 Delay0 00:15:47.540 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:47.540 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:47.540 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:47.540 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:47.540 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:47.540 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1944633 00:15:47.540 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:15:47.540 00:31:13 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:47.540 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.540 [2024-05-15 00:31:13.598177] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
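With the target up, the delete_subsystem test body is a handful of RPCs followed by a perf run that has its subsystem deleted out from under it. The sketch below replays the calls with scripts/rpc.py (the test's rpc_cmd helper wraps the same script, talking to the default /var/tmp/spdk.sock); every subcommand and argument is taken from the trace, and only the plain-script packaging is added.

  rpc=scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # Transport, subsystem, listener on the namespaced target address.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420

  # A 1000 MB null bdev wrapped in a delay bdev adding ~1 s of latency
  # (values are in microseconds), exposed as a namespace of cnode1 --
  # slow enough that I/O is still in flight when the subsystem goes away.
  $rpc bdev_null_create NULL1 1000 512
  $rpc bdev_delay_create -b NULL1 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns "$nqn" Delay0

  # 70/30 random read/write, QD 128, 512-byte I/O, from cores 2-3.
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

  sleep 2
  # Delete the subsystem while perf still has I/O queued; the queued
  # commands come back as the error completions listed below.
  $rpc nvmf_delete_subsystem "$nqn"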
00:15:49.442 00:31:15 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:49.442 00:31:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:49.442 00:31:15 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 starting I/O failed: -6 00:15:49.700 Write completed with error (sct=0, sc=8) 00:15:49.700 Write completed with error (sct=0, sc=8) 00:15:49.700 Write completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 starting I/O failed: -6 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Write completed with error (sct=0, sc=8) 00:15:49.700 starting I/O failed: -6 00:15:49.700 Write completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 starting I/O failed: -6 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Write completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 starting I/O failed: -6 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 starting I/O failed: -6 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 starting I/O failed: -6 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Write completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 starting I/O failed: -6 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 starting I/O failed: -6 00:15:49.700 Write completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 starting I/O failed: -6 00:15:49.700 Write completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 [2024-05-15 00:31:15.861787] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030000 is same with the state(5) to be set 00:15:49.700 Write completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Write completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Write completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, 
sc=8) 00:15:49.700 Write completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.700 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 starting I/O failed: -6 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 starting I/O failed: -6 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 starting I/O failed: -6 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 starting I/O failed: -6 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 starting I/O failed: -6 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 starting I/O failed: -6 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read 
completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 starting I/O failed: -6 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 starting I/O failed: -6 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 starting I/O failed: -6 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 starting I/O failed: -6 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 starting I/O failed: -6 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 starting I/O failed: -6 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 starting I/O failed: -6 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 starting I/O failed: -6 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 [2024-05-15 00:31:15.862810] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000025600 is same with the state(5) to be set 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 
00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Write completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 Read completed with error (sct=0, sc=8) 00:15:49.701 [2024-05-15 00:31:15.863155] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030280 is same with the state(5) to be set 00:15:51.081 [2024-05-15 00:31:16.815824] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000024c00 is same with the state(5) to be set 00:15:51.081 Read completed with error (sct=0, sc=8) 00:15:51.081 Read completed with error (sct=0, sc=8) 00:15:51.081 Read completed with error (sct=0, sc=8) 00:15:51.081 Read completed with error (sct=0, sc=8) 00:15:51.081 Write completed with error (sct=0, sc=8) 00:15:51.081 Read completed with error (sct=0, sc=8) 00:15:51.081 Read completed with error (sct=0, sc=8) 00:15:51.081 Read completed with error (sct=0, sc=8) 00:15:51.081 Read completed with error (sct=0, sc=8) 00:15:51.081 Write completed with error (sct=0, sc=8) 00:15:51.081 Read completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 
Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 [2024-05-15 00:31:16.856782] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000025100 is same with the state(5) to be set 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 [2024-05-15 00:31:16.857120] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000025880 is same with the state(5) to be set 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read 
completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 [2024-05-15 00:31:16.858799] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000025380 is same with the state(5) to be set 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Write completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 Read completed with error (sct=0, sc=8) 00:15:51.082 [2024-05-15 00:31:16.863813] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x615000030500 is same with the state(5) to be set 00:15:51.082 Initializing NVMe Controllers 00:15:51.082 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:51.082 Controller IO queue size 128, less than required. 00:15:51.082 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:51.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:51.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:51.082 Initialization complete. Launching workers. 
00:15:51.082 ======================================================== 00:15:51.082 Latency(us) 00:15:51.082 Device Information : IOPS MiB/s Average min max 00:15:51.082 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 193.80 0.09 945366.56 7019.17 1015494.85 00:15:51.082 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 157.61 0.08 868559.36 456.35 1013394.11 00:15:51.082 ======================================================== 00:15:51.082 Total : 351.41 0.17 910917.07 456.35 1015494.85 00:15:51.082 00:15:51.082 [2024-05-15 00:31:16.864702] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000024c00 (9): Bad file descriptor 00:15:51.082 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:15:51.082 00:31:16 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:51.082 00:31:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:15:51.082 00:31:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1944633 00:15:51.082 00:31:16 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1944633 00:15:51.341 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1944633) - No such process 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1944633 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@649 -- # local es=0 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # valid_exec_arg wait 1944633 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@637 -- # local arg=wait 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # type -t wait 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # wait 1944633 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # es=1 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:51.341 [2024-05-15 00:31:17.389685] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@560 -- # xtrace_disable 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1945251 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1945251 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:51.341 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:51.341 EAL: No free 2048 kB hugepages reported on node 1 00:15:51.341 [2024-05-15 00:31:17.487682] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
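Both perf runs are reaped the same way: poll the PID with kill -0 every half second until the process disappears, with an upper bound on the number of ticks, then use wait to collect its exit status (bash keeps the status of an already-reaped background child, which is why wait still works after kill -0 starts reporting "No such process"). A sketch of that pattern, with the loop factored into a helper; wait_for_perf is a name introduced here, the traced delete_subsystem.sh inlines the same loop.

  # Poll until the background perf process is gone, giving up after
  # max ticks of 0.5 s. Returns non-zero on timeout.
  wait_for_perf() {
      local pid=$1 max=$2 delay=0
      while kill -0 "$pid" 2>/dev/null; do
          (( delay++ > max )) && return 1
          sleep 0.5
      done
      return 0
  }

  # First run: the subsystem was deleted under load, so perf must fail.
  wait_for_perf "$perf_pid" 30 && ! wait "$perf_pid"

  # Second run: nothing is deleted, so perf is expected to exit cleanly.
  wait_for_perf "$perf_pid" 20 && wait "$perf_pid"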
00:15:51.910 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:51.910 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1945251 00:15:51.910 00:31:17 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:52.478 00:31:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:52.478 00:31:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1945251 00:15:52.478 00:31:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:53.042 00:31:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:53.042 00:31:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1945251 00:15:53.042 00:31:18 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:53.302 00:31:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:53.302 00:31:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1945251 00:15:53.302 00:31:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:53.873 00:31:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:53.873 00:31:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1945251 00:15:53.873 00:31:19 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:54.440 00:31:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:54.440 00:31:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1945251 00:15:54.440 00:31:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:54.698 Initializing NVMe Controllers 00:15:54.698 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:54.698 Controller IO queue size 128, less than required. 00:15:54.698 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:54.698 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:54.698 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:54.698 Initialization complete. Launching workers. 
00:15:54.698 ======================================================== 00:15:54.698 Latency(us) 00:15:54.698 Device Information : IOPS MiB/s Average min max 00:15:54.698 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003334.87 1000132.03 1010384.11 00:15:54.698 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004078.65 1000128.83 1041009.53 00:15:54.698 ======================================================== 00:15:54.698 Total : 256.00 0.12 1003706.76 1000128.83 1041009.53 00:15:54.698 00:15:54.957 00:31:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:54.957 00:31:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1945251 00:15:54.957 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1945251) - No such process 00:15:54.957 00:31:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1945251 00:15:54.957 00:31:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:54.957 00:31:20 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:54.957 00:31:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:54.957 00:31:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:15:54.957 00:31:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:54.957 00:31:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:15:54.957 00:31:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:54.957 00:31:20 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:54.957 rmmod nvme_tcp 00:15:54.957 rmmod nvme_fabrics 00:15:54.957 rmmod nvme_keyring 00:15:54.957 00:31:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:54.957 00:31:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:15:54.957 00:31:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:15:54.957 00:31:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1944327 ']' 00:15:54.957 00:31:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1944327 00:15:54.957 00:31:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@947 -- # '[' -z 1944327 ']' 00:15:54.957 00:31:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@951 -- # kill -0 1944327 00:15:54.957 00:31:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # uname 00:15:54.957 00:31:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:15:54.957 00:31:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1944327 00:15:54.957 00:31:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:15:54.957 00:31:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:15:54.957 00:31:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1944327' 00:15:54.957 killing process with pid 1944327 00:15:54.957 00:31:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # kill 1944327 00:15:54.957 [2024-05-15 00:31:21.073381] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:15:54.957 00:31:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # wait 1944327 00:15:55.528 00:31:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:55.528 00:31:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:55.528 00:31:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:55.528 00:31:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:55.528 00:31:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:55.528 00:31:21 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.528 00:31:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:55.528 00:31:21 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.060 00:31:23 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:58.060 00:15:58.060 real 0m17.324s 00:15:58.060 user 0m31.310s 00:15:58.060 sys 0m5.507s 00:15:58.060 00:31:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # xtrace_disable 00:15:58.060 00:31:23 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:15:58.060 ************************************ 00:15:58.060 END TEST nvmf_delete_subsystem 00:15:58.060 ************************************ 00:15:58.060 00:31:23 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:58.060 00:31:23 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:15:58.060 00:31:23 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:15:58.060 00:31:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:58.060 ************************************ 00:15:58.060 START TEST nvmf_ns_masking 00:15:58.060 ************************************ 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1122 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:58.060 * Looking for test storage... 
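The nvmf_delete_subsystem run above finishes with nvmftestfini, whose teardown reduces to unloading the NVMe/TCP modules, stopping the target and undoing the namespace plumbing. The sketch condenses the commands visible in the trace; grouping them into one function is illustrative, and the final `ip netns delete` is an assumption standing in for the log's _remove_spdk_ns helper.

  nvmf_teardown() {
      local nvmfpid=$1 netns=cvl_0_0_ns_spdk

      sync
      # Unloading nvme-tcp also pulls out nvme_fabrics and nvme_keyring,
      # as the rmmod lines above show.
      modprobe -v -r nvme-tcp || true
      modprobe -v -r nvme-fabrics || true

      # Stop the nvmf_tgt reactor process started for this test.
      kill "$nvmfpid"
      wait "$nvmfpid" || true

      # Undo the namespace plumbing from the setup phase.
      ip netns delete "$netns"      # assumed equivalent of _remove_spdk_ns
      ip -4 addr flush cvl_0_1
  }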
00:15:58.060 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # loops=5 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # uuidgen 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@15 -- # HOSTID=8a9f876e-3e22-49d5-b6f1-80970b645928 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvmftestinit 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:58.060 00:31:23 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:15:58.060 00:31:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:16:03.352 Found 0000:27:00.0 (0x8086 - 0x159b) 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:16:03.352 Found 0000:27:00.1 (0x8086 - 0x159b) 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:16:03.352 Found net devices under 0000:27:00.0: cvl_0_0 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:03.352 
00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:16:03.352 Found net devices under 0000:27:00.1: cvl_0_1 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:03.352 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:03.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:16:03.352 00:16:03.352 --- 10.0.0.2 ping statistics --- 00:16:03.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.352 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:03.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:03.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:16:03.352 00:16:03.352 --- 10.0.0.1 ping statistics --- 00:16:03.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.352 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@721 -- # xtrace_disable 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1950023 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1950023 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@828 -- # '[' -z 1950023 ']' 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:03.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:03.352 00:31:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:03.352 [2024-05-15 00:31:29.041607] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
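A condensed sketch of the NVMe/TCP test-network setup traced above: one ice port (cvl_0_0) is moved into a private network namespace for the target, its peer (cvl_0_1) stays in the default namespace for the initiator, both sides get a 10.0.0.0/24 address, port 4420 is opened, and connectivity is verified with ping before nvmf_tgt is launched inside the namespace. Interface names and addresses are the ones this particular run reported; this is an illustration of the steps, not the harness's exact code.

#!/usr/bin/env bash
set -euo pipefail

target_if=cvl_0_0        # served by nvmf_tgt inside the namespace
initiator_if=cvl_0_1     # stays in the default namespace
netns=cvl_0_0_ns_spdk

ip -4 addr flush "$target_if"
ip -4 addr flush "$initiator_if"

ip netns add "$netns"
ip link set "$target_if" netns "$netns"

ip addr add 10.0.0.1/24 dev "$initiator_if"
ip netns exec "$netns" ip addr add 10.0.0.2/24 dev "$target_if"

ip link set "$initiator_if" up
ip netns exec "$netns" ip link set "$target_if" up
ip netns exec "$netns" ip link set lo up

# Let NVMe/TCP traffic (port 4420) in from the initiator-facing interface.
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT

# Connectivity checks, as in the trace, then the target runs inside the namespace:
ping -c 1 10.0.0.2
ip netns exec "$netns" ping -c 1 10.0.0.1
# ip netns exec "$netns" /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF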
00:16:03.352 [2024-05-15 00:31:29.041705] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.352 EAL: No free 2048 kB hugepages reported on node 1 00:16:03.352 [2024-05-15 00:31:29.162309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:03.352 [2024-05-15 00:31:29.262798] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.352 [2024-05-15 00:31:29.262832] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.352 [2024-05-15 00:31:29.262842] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:03.352 [2024-05-15 00:31:29.262851] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:03.352 [2024-05-15 00:31:29.262859] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:03.352 [2024-05-15 00:31:29.262965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.352 [2024-05-15 00:31:29.262997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.352 [2024-05-15 00:31:29.262973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.352 [2024-05-15 00:31:29.263008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:03.612 00:31:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:03.612 00:31:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@861 -- # return 0 00:16:03.612 00:31:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:03.612 00:31:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:03.612 00:31:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:03.872 00:31:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:03.872 00:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:03.872 [2024-05-15 00:31:29.939253] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:03.872 00:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:16:03.872 00:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:16:03.872 00:31:29 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:04.130 Malloc1 00:16:04.130 00:31:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:04.388 Malloc2 00:16:04.388 00:31:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:04.388 00:31:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:16:04.648 00:31:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:04.648 [2024-05-15 00:31:30.748628] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:04.648 [2024-05-15 00:31:30.748934] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:04.648 00:31:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@61 -- # connect 00:16:04.648 00:31:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8a9f876e-3e22-49d5-b6f1-80970b645928 -a 10.0.0.2 -s 4420 -i 4 00:16:04.909 00:31:30 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:16:04.909 00:31:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:16:04.909 00:31:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:16:04.909 00:31:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:16:04.909 00:31:30 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:16:07.453 00:31:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:16:07.453 00:31:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:16:07.453 00:31:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:16:07.453 00:31:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:16:07.453 00:31:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:16:07.453 00:31:32 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:16:07.453 00:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:16:07.453 00:31:32 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:07.453 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:16:07.454 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:16:07.454 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:16:07.454 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:07.454 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:07.454 [ 0]:0x1 00:16:07.454 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:07.454 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:07.454 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=56fcc3972097403bbc7de1a008aa14f3 00:16:07.454 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 56fcc3972097403bbc7de1a008aa14f3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:07.454 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:07.454 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@66 -- # 
ns_is_visible 0x1 00:16:07.454 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:07.454 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:07.454 [ 0]:0x1 00:16:07.454 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:07.454 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:07.454 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=56fcc3972097403bbc7de1a008aa14f3 00:16:07.454 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 56fcc3972097403bbc7de1a008aa14f3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:07.454 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:16:07.454 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:07.454 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:07.454 [ 1]:0x2 00:16:07.454 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:07.454 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:07.454 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7bf4acf0f764428fb7d2481956287e80 00:16:07.454 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7bf4acf0f764428fb7d2481956287e80 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:07.454 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@69 -- # disconnect 00:16:07.454 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:07.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.454 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:07.714 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:07.714 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@77 -- # connect 1 00:16:07.714 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8a9f876e-3e22-49d5-b6f1-80970b645928 -a 10.0.0.2 -s 4420 -i 4 00:16:07.973 00:31:33 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:07.973 00:31:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:16:07.973 00:31:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:16:07.973 00:31:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n 1 ]] 00:16:07.973 00:31:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # nvme_device_counter=1 00:16:07.973 00:31:33 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:16:09.878 00:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:16:09.878 00:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:16:09.878 00:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 
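The visibility probes in this trace (target/ns_masking.sh@39-41) combine nvme list-ns, nvme id-ns in JSON form, and a comparison of the reported NGUID against all zeroes: a namespace masked from the connecting host identifies with an all-zero NGUID. A rough reconstruction of that helper, inferred from the trace rather than copied from the test script, so details may differ.

# Inferred from the trace; /dev/nvme0 is the controller this run resolved
# earlier via 'nvme list-subsys -o json | jq ...'.
ns_is_visible() {
    local nsid=$1
    # Print the matching entry from the active namespace list, if any.
    nvme list-ns /dev/nvme0 | grep "$nsid" || true
    # A visible namespace reports a real NGUID; a masked one reports all zeroes.
    local nguid
    nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
    [[ $nguid != "00000000000000000000000000000000" ]]
}

# Example, matching the checks in this trace:
ns_is_visible 0x1    # succeeds only while namespace 1 is visible to this host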
00:16:09.878 00:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:16:09.878 00:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:16:09.878 00:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:16:09.878 00:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:16:09.878 00:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:09.878 00:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:16:09.878 00:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:16:09.878 00:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:16:09.878 00:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:16:09.879 00:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:16:09.879 00:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:16:09.879 00:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:09.879 00:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:16:09.879 00:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:09.879 00:31:35 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:16:09.879 00:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:09.879 00:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:09.879 00:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:09.879 00:31:35 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:09.879 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:16:09.879 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:09.879 00:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:16:09.879 00:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:09.879 00:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:09.879 00:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:09.879 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:16:09.879 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:09.879 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:09.879 [ 0]:0x2 00:16:09.879 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:09.879 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:10.137 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7bf4acf0f764428fb7d2481956287e80 00:16:10.137 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7bf4acf0f764428fb7d2481956287e80 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 
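Taken together, the RPCs exercised in this part of the test form the masking workflow: the namespace is attached with --no-auto-visible so no host sees it by default, then access is granted and revoked per host NQN with nvmf_ns_add_host and nvmf_ns_remove_host (both appear in the trace that follows). A minimal sketch using the subsystem and host NQNs from this run; the rpc.py path is the workspace-local one used throughout this log.

rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1
host=nqn.2016-06.io.spdk:host1

# Attach Malloc1 as namespace 1, hidden from all hosts until explicitly allowed.
$rpc nvmf_subsystem_add_ns "$subsys" Malloc1 -n 1 --no-auto-visible

# Grant this host access: ns_is_visible 0x1 now passes for it.
$rpc nvmf_ns_add_host "$subsys" 1 "$host"

# Revoke access again: the namespace drops back to an all-zero NGUID for this host.
$rpc nvmf_ns_remove_host "$subsys" 1 "$host"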
00:16:10.138 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:10.138 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:16:10.138 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:10.138 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:10.138 [ 0]:0x1 00:16:10.138 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:10.138 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:10.138 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=56fcc3972097403bbc7de1a008aa14f3 00:16:10.138 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 56fcc3972097403bbc7de1a008aa14f3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:10.138 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:16:10.138 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:10.138 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:10.138 [ 1]:0x2 00:16:10.138 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:10.138 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:10.397 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7bf4acf0f764428fb7d2481956287e80 00:16:10.397 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7bf4acf0f764428fb7d2481956287e80 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:10.397 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:10.397 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:16:10.397 00:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:16:10.397 00:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:16:10.397 00:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:16:10.397 00:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:10.397 00:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:16:10.397 00:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:10.397 00:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:16:10.397 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:10.397 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:10.397 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:10.397 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:10.397 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:16:10.397 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 
00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:10.397 00:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:16:10.397 00:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:10.398 00:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:10.398 00:31:36 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:10.398 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:16:10.398 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:10.398 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:10.656 [ 0]:0x2 00:16:10.656 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:10.656 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:10.656 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7bf4acf0f764428fb7d2481956287e80 00:16:10.656 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7bf4acf0f764428fb7d2481956287e80 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:10.656 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@91 -- # disconnect 00:16:10.656 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:10.656 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.656 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:10.914 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # connect 2 00:16:10.914 00:31:36 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 8a9f876e-3e22-49d5-b6f1-80970b645928 -a 10.0.0.2 -s 4420 -i 4 00:16:10.914 00:31:37 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:10.914 00:31:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1195 -- # local i=0 00:16:10.914 00:31:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:16:10.914 00:31:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1197 -- # [[ -n 2 ]] 00:16:10.914 00:31:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # nvme_device_counter=2 00:16:10.914 00:31:37 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # sleep 2 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # nvme_devices=2 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # return 0 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:16:13.468 00:31:39 
nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:13.468 [ 0]:0x1 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=56fcc3972097403bbc7de1a008aa14f3 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 56fcc3972097403bbc7de1a008aa14f3 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:13.468 [ 1]:0x2 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7bf4acf0f764428fb7d2481956287e80 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7bf4acf0f764428fb7d2481956287e80 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:16:13.468 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # 
nguid=00000000000000000000000000000000 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:13.469 [ 0]:0x2 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7bf4acf0f764428fb7d2481956287e80 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7bf4acf0f764428fb7d2481956287e80 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:13.469 [2024-05-15 00:31:39.558098] nvmf_rpc.c:1781:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:13.469 request: 00:16:13.469 { 00:16:13.469 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:16:13.469 "nsid": 2, 00:16:13.469 "host": "nqn.2016-06.io.spdk:host1", 00:16:13.469 "method": "nvmf_ns_remove_host", 00:16:13.469 "req_id": 1 00:16:13.469 } 00:16:13.469 Got JSON-RPC error response 00:16:13.469 response: 00:16:13.469 { 00:16:13.469 "code": -32602, 00:16:13.469 "message": "Invalid parameters" 00:16:13.469 } 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@649 -- # local es=0 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # valid_exec_arg ns_is_visible 0x1 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@637 -- # local arg=ns_is_visible 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # type -t ns_is_visible 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # ns_is_visible 0x1 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x1 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:13.469 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:13.787 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:16:13.787 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:13.787 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@652 -- # es=1 00:16:13.787 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:16:13.787 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:16:13.787 00:31:39 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:16:13.787 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:16:13.787 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:16:13.787 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@39 -- # grep 0x2 00:16:13.787 [ 0]:0x2 00:16:13.787 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:13.787 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # jq -r .nguid 00:16:13.787 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@40 -- # nguid=7bf4acf0f764428fb7d2481956287e80 00:16:13.787 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@41 -- # [[ 7bf4acf0f764428fb7d2481956287e80 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:13.787 00:31:39 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@108 -- # disconnect 00:16:13.787 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:13.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.787 00:31:39 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:14.057 00:31:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:16:14.057 00:31:40 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # nvmftestfini 00:16:14.057 00:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:14.057 00:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:16:14.057 00:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:14.057 00:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:16:14.057 00:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:14.057 00:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:14.057 rmmod nvme_tcp 00:16:14.057 rmmod nvme_fabrics 00:16:14.057 rmmod nvme_keyring 00:16:14.057 00:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:14.057 00:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:16:14.057 00:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:16:14.057 00:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1950023 ']' 00:16:14.057 00:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1950023 00:16:14.057 00:31:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@947 -- # '[' -z 1950023 ']' 00:16:14.057 00:31:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@951 -- # kill -0 1950023 00:16:14.057 00:31:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # uname 00:16:14.057 00:31:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:14.057 00:31:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1950023 00:16:14.057 00:31:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:16:14.057 00:31:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:16:14.057 00:31:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1950023' 00:16:14.057 killing process with pid 1950023 00:16:14.057 00:31:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # kill 1950023 00:16:14.058 [2024-05-15 00:31:40.158255] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:14.058 00:31:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@971 -- # wait 1950023 00:16:14.623 00:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:14.623 00:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:14.623 00:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:14.623 00:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:14.623 00:31:40 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:14.623 00:31:40 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.623 00:31:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.623 00:31:40 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.157 00:31:42 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:17.157 00:16:17.157 real 0m19.142s 00:16:17.157 user 0m49.123s 00:16:17.157 sys 0m5.238s 00:16:17.157 00:31:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:17.157 00:31:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:17.157 ************************************ 00:16:17.157 END TEST nvmf_ns_masking 00:16:17.157 ************************************ 00:16:17.157 00:31:42 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:16:17.157 00:31:42 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:16:17.157 00:31:42 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:17.157 00:31:42 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:17.157 00:31:42 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:17.157 00:31:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:17.157 ************************************ 00:16:17.157 START TEST nvmf_host_management 00:16:17.157 ************************************ 00:16:17.157 00:31:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:17.157 * Looking for test storage... 00:16:17.157 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:16:17.157 00:31:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:16:17.157 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:17.157 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.157 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.157 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.157 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.157 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.157 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.157 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.157 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.157 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.157 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.157 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:16:17.157 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:16:17.157 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:16:17.157 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.157 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:17.157 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:17.157 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:16:17.157 00:31:42 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.157 00:31:42 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.157 00:31:42 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.157 00:31:42 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.157 00:31:42 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.158 00:31:42 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.158 00:31:42 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:17.158 00:31:42 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.158 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:17.158 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
00:16:17.158 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:17.158 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:17.158 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:17.158 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.158 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:17.158 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:17.158 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:17.158 00:31:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:17.158 00:31:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:17.158 00:31:42 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:17.158 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:17.158 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:17.158 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:17.158 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:17.158 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:17.158 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.158 00:31:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:17.158 00:31:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.158 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:16:17.158 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:17.158 00:31:42 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:17.158 00:31:42 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:23.726 00:31:48 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:16:23.726 Found 0000:27:00.0 (0x8086 - 0x159b) 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:16:23.726 Found 0000:27:00.1 (0x8086 - 0x159b) 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b 
== \0\x\1\0\1\9 ]] 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:16:23.726 Found net devices under 0000:27:00.0: cvl_0_0 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:16:23.726 Found net devices under 0000:27:00.1: cvl_0_1 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:23.726 00:31:48 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:23.726 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:23.727 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:23.727 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:23.727 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:23.727 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:23.727 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:16:23.727 00:16:23.727 --- 10.0.0.2 ping statistics --- 00:16:23.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.727 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:16:23.727 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:23.727 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:23.727 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:16:23.727 00:16:23.727 --- 10.0.0.1 ping statistics --- 00:16:23.727 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.727 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:23.727 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.727 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:23.727 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:23.727 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.727 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:23.727 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:23.727 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.727 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:23.727 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:23.727 00:31:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:23.727 00:31:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:23.727 00:31:48 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:23.727 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:23.727 00:31:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@721 -- # xtrace_disable 00:16:23.727 00:31:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:23.727 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1956279 00:16:23.727 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1956279 00:16:23.727 00:31:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # '[' -z 1956279 ']' 00:16:23.727 00:31:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.727 00:31:48 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:23.727 00:31:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:23.727 00:31:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.727 00:31:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:23.727 00:31:48 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:23.727 [2024-05-15 00:31:48.969321] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:16:23.727 [2024-05-15 00:31:48.969449] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.727 EAL: No free 2048 kB hugepages reported on node 1 00:16:23.727 [2024-05-15 00:31:49.109273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:23.727 [2024-05-15 00:31:49.215392] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.727 [2024-05-15 00:31:49.215445] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.727 [2024-05-15 00:31:49.215456] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:23.727 [2024-05-15 00:31:49.215466] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:23.727 [2024-05-15 00:31:49.215474] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:23.727 [2024-05-15 00:31:49.215631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:23.727 [2024-05-15 00:31:49.215743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:23.727 [2024-05-15 00:31:49.215868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.727 [2024-05-15 00:31:49.215897] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@861 -- # return 0 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:23.727 [2024-05-15 00:31:49.718472] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@721 -- # xtrace_disable 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:23.727 00:31:49 
nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:23.727 Malloc0 00:16:23.727 [2024-05-15 00:31:49.794636] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:23.727 [2024-05-15 00:31:49.794976] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1956512 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1956512 /var/tmp/bdevperf.sock 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@828 -- # '[' -z 1956512 ']' 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:23.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:23.727 { 00:16:23.727 "params": { 00:16:23.727 "name": "Nvme$subsystem", 00:16:23.727 "trtype": "$TEST_TRANSPORT", 00:16:23.727 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:23.727 "adrfam": "ipv4", 00:16:23.727 "trsvcid": "$NVMF_PORT", 00:16:23.727 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:23.727 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:23.727 "hdgst": ${hdgst:-false}, 00:16:23.727 "ddgst": ${ddgst:-false} 00:16:23.727 }, 00:16:23.727 "method": "bdev_nvme_attach_controller" 00:16:23.727 } 00:16:23.727 EOF 00:16:23.727 )") 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 
00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:23.727 00:31:49 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:23.727 "params": { 00:16:23.727 "name": "Nvme0", 00:16:23.727 "trtype": "tcp", 00:16:23.727 "traddr": "10.0.0.2", 00:16:23.727 "adrfam": "ipv4", 00:16:23.727 "trsvcid": "4420", 00:16:23.727 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:23.727 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:23.727 "hdgst": false, 00:16:23.727 "ddgst": false 00:16:23.727 }, 00:16:23.727 "method": "bdev_nvme_attach_controller" 00:16:23.727 }' 00:16:23.987 [2024-05-15 00:31:49.929798] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:16:23.987 [2024-05-15 00:31:49.929940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1956512 ] 00:16:23.987 EAL: No free 2048 kB hugepages reported on node 1 00:16:23.987 [2024-05-15 00:31:50.065482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.245 [2024-05-15 00:31:50.164998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.502 Running I/O for 10 seconds... 00:16:24.502 00:31:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:24.502 00:31:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@861 -- # return 0 00:16:24.503 00:31:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:24.503 00:31:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:24.503 00:31:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:24.762 00:31:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:24.762 00:31:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:24.763 00:31:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:24.763 00:31:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:24.763 00:31:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:24.763 00:31:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:24.763 00:31:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:24.763 00:31:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:24.763 00:31:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:24.763 00:31:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:24.763 00:31:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:24.763 00:31:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:24.763 00:31:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:24.763 00:31:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:24.763 00:31:50 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=195 00:16:24.763 00:31:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 195 -ge 100 ']' 00:16:24.763 00:31:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:16:24.763 00:31:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:24.763 00:31:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:24.763 00:31:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:24.763 00:31:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:24.763 00:31:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:24.763 [2024-05-15 00:31:50.713793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.713856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.713893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.713903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.713919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.713927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.713938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.713946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.713956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.713964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.713974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.713981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.713992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714017] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714200] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714376] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714554] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.763 [2024-05-15 00:31:50.714637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.763 [2024-05-15 00:31:50.714644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.764 [2024-05-15 00:31:50.714654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.764 [2024-05-15 00:31:50.714661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.764 [2024-05-15 00:31:50.714671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.764 [2024-05-15 00:31:50.714679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.764 [2024-05-15 00:31:50.714689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.764 [2024-05-15 00:31:50.714696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.764 [2024-05-15 00:31:50.714706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.764 [2024-05-15 00:31:50.714713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.764 [2024-05-15 00:31:50.714723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.764 [2024-05-15 00:31:50.714731] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.764 [2024-05-15 00:31:50.714741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.764 [2024-05-15 00:31:50.714748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.764 [2024-05-15 00:31:50.714759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.764 [2024-05-15 00:31:50.714766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.764 [2024-05-15 00:31:50.714776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.764 [2024-05-15 00:31:50.714784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.764 [2024-05-15 00:31:50.714793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.764 [2024-05-15 00:31:50.714801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.764 [2024-05-15 00:31:50.714815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.764 [2024-05-15 00:31:50.714823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.764 [2024-05-15 00:31:50.714832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.764 [2024-05-15 00:31:50.714841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.764 [2024-05-15 00:31:50.714850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.764 [2024-05-15 00:31:50.714858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.764 [2024-05-15 00:31:50.714867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.764 [2024-05-15 00:31:50.714874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.764 [2024-05-15 00:31:50.714884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.764 [2024-05-15 00:31:50.714892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.764 [2024-05-15 00:31:50.714902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.764 [2024-05-15 00:31:50.714909] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.764 [2024-05-15 00:31:50.714918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.764 [2024-05-15 00:31:50.714926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.764 [2024-05-15 00:31:50.714936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.764 [2024-05-15 00:31:50.714943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.764 [2024-05-15 00:31:50.714953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.764 [2024-05-15 00:31:50.714961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.764 [2024-05-15 00:31:50.714971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.764 [2024-05-15 00:31:50.714978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.764 [2024-05-15 00:31:50.714988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.764 [2024-05-15 00:31:50.714995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.764 [2024-05-15 00:31:50.715005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:24.764 [2024-05-15 00:31:50.715013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:24.764 [2024-05-15 00:31:50.715022] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a1900 is same with the state(5) to be set 00:16:24.764 [2024-05-15 00:31:50.715157] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150003a1900 was disconnected and freed. reset controller. 
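The block of ABORTED - SQ DELETION (00/08) completions above is the expected fallout of target/host_management.sh line 84: nqn.2016-06.io.spdk:host0 is removed from nqn.2016-06.io.spdk:cnode0 while bdevperf still has a 64-deep queue outstanding, so the target deletes the submission queue, every in-flight READ/WRITE on qpair 1 completes as aborted, and the initiator-side bdev_nvme layer frees qpair 0x6150003a1900 and schedules a controller reset. A minimal sketch of the same revoke-and-restore sequence done by hand with rpc.py, assuming an SPDK checkout with a target already serving cnode0 on 10.0.0.2:4420 (the checkout-relative rpc.py path and the default /var/tmp/spdk.sock socket are assumptions; the NQNs are the ones used in this run):

    #!/usr/bin/env bash
    # Revoke host access while I/O is in flight, then restore it, mirroring
    # target/host_management.sh lines 84-85. Outstanding commands from the
    # revoked host complete with ABORTED - SQ DELETION, as in the dump above.
    RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"   # assumed rpc.py location and default RPC socket
    SUBSYS=nqn.2016-06.io.spdk:cnode0
    HOST=nqn.2016-06.io.spdk:host0

    $RPC nvmf_subsystem_remove_host "$SUBSYS" "$HOST"   # triggers the abort storm and a controller reset
    $RPC nvmf_subsystem_add_host    "$SUBSYS" "$HOST"   # lets the initiator's automatic reset reconnect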
00:16:24.764 [2024-05-15 00:31:50.716081] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:24.764 00:31:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:24.764 00:31:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:24.764 task offset: 40832 on job bdev=Nvme0n1 fails 00:16:24.764 00:16:24.764 Latency(us) 00:16:24.764 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:24.764 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:24.764 Job: Nvme0n1 ended in about 0.16 seconds with error 00:16:24.764 Verification LBA range: start 0x0 length 0x400 00:16:24.764 Nvme0n1 : 0.16 1570.05 98.13 392.51 0.00 31080.50 6726.06 30215.55 00:16:24.764 =================================================================================================================== 00:16:24.764 Total : 1570.05 98.13 392.51 0.00 31080.50 6726.06 30215.55 00:16:24.764 00:31:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@560 -- # xtrace_disable 00:16:24.764 00:31:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:24.764 [2024-05-15 00:31:50.718465] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:24.764 [2024-05-15 00:31:50.718501] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a1180 (9): Bad file descriptor 00:16:24.764 00:31:50 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:16:24.764 00:31:50 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:16:24.764 [2024-05-15 00:31:50.727693] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
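The figures in the failed-run table are consistent with a single queue's worth of aborted I/O. A quick arithmetic check, assuming nothing beyond the 65536-byte I/O size and the numbers printed above:

    # 1570.05 IOPS of 64 KiB I/O is 98.13 MiB/s, matching the MiB/s column, and
    # ~0.16 s at 392.51 failures/s is roughly the 64 commands (queue depth 64)
    # that were outstanding when the submission queue was deleted.
    awk 'BEGIN {
        printf "throughput : %.2f MiB/s\n", 1570.05 * 65536 / (1024 * 1024)
        printf "failed I/Os: %.0f\n",       392.51 * 0.16
    }'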
00:16:25.701 00:31:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1956512 00:16:25.701 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1956512) - No such process 00:16:25.701 00:31:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:25.701 00:31:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:25.701 00:31:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:25.701 00:31:51 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:25.701 00:31:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:25.701 00:31:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:25.701 00:31:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:25.701 00:31:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:25.701 { 00:16:25.701 "params": { 00:16:25.701 "name": "Nvme$subsystem", 00:16:25.701 "trtype": "$TEST_TRANSPORT", 00:16:25.701 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:25.701 "adrfam": "ipv4", 00:16:25.701 "trsvcid": "$NVMF_PORT", 00:16:25.701 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:25.701 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:25.701 "hdgst": ${hdgst:-false}, 00:16:25.701 "ddgst": ${ddgst:-false} 00:16:25.701 }, 00:16:25.701 "method": "bdev_nvme_attach_controller" 00:16:25.701 } 00:16:25.701 EOF 00:16:25.701 )") 00:16:25.701 00:31:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:25.701 00:31:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:25.701 00:31:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:25.701 00:31:51 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:25.701 "params": { 00:16:25.701 "name": "Nvme0", 00:16:25.701 "trtype": "tcp", 00:16:25.701 "traddr": "10.0.0.2", 00:16:25.701 "adrfam": "ipv4", 00:16:25.701 "trsvcid": "4420", 00:16:25.701 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:25.701 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:25.701 "hdgst": false, 00:16:25.701 "ddgst": false 00:16:25.701 }, 00:16:25.701 "method": "bdev_nvme_attach_controller" 00:16:25.701 }' 00:16:25.701 [2024-05-15 00:31:51.816690] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:16:25.701 [2024-05-15 00:31:51.816838] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1956846 ] 00:16:25.960 EAL: No free 2048 kB hugepages reported on node 1 00:16:25.960 [2024-05-15 00:31:51.948481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.960 [2024-05-15 00:31:52.047043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.218 Running I/O for 1 seconds... 
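This second run drives the same 64 KiB verify workload at queue depth 64, but for one second (-t 1) and without any host manipulation, so it is expected to complete cleanly. A standalone sketch of the equivalent invocation from an SPDK build tree, with the config written to a file instead of being fed through /dev/fd/62: the bdev_nvme_attach_controller parameters are exactly the ones printed by gen_nvmf_target_json above, while the outer "subsystems"/"bdev" wrapper is the standard SPDK application JSON layout and is assumed here rather than copied from this log (the /tmp/nvme0.json filename is illustrative):

    #!/usr/bin/env bash
    # One-second 64 KiB verify run at queue depth 64 against the Nvme0 controller
    # attached over NVMe/TCP at 10.0.0.2:4420.
    cat > /tmp/nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    ./build/examples/bdevperf --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 1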
00:16:27.594 00:16:27.594 Latency(us) 00:16:27.594 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.594 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:27.594 Verification LBA range: start 0x0 length 0x400 00:16:27.594 Nvme0n1 : 1.06 2579.14 161.20 0.00 0.00 23442.26 2397.24 44702.45 00:16:27.594 =================================================================================================================== 00:16:27.594 Total : 2579.14 161.20 0.00 0.00 23442.26 2397.24 44702.45 00:16:27.594 00:31:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:27.594 00:31:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:27.594 00:31:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:27.594 00:31:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:27.594 00:31:53 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:27.594 00:31:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:27.594 00:31:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:27.594 00:31:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:27.594 00:31:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:27.594 00:31:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:27.594 00:31:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:27.594 rmmod nvme_tcp 00:16:27.594 rmmod nvme_fabrics 00:16:27.852 rmmod nvme_keyring 00:16:27.852 00:31:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:27.852 00:31:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:27.852 00:31:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:27.852 00:31:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1956279 ']' 00:16:27.852 00:31:53 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1956279 00:16:27.852 00:31:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@947 -- # '[' -z 1956279 ']' 00:16:27.852 00:31:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@951 -- # kill -0 1956279 00:16:27.852 00:31:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # uname 00:16:27.852 00:31:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:27.852 00:31:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1956279 00:16:27.852 00:31:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:16:27.852 00:31:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:16:27.852 00:31:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1956279' 00:16:27.852 killing process with pid 1956279 00:16:27.852 00:31:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # kill 1956279 00:16:27.852 [2024-05-15 00:31:53.848486] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is 
deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:16:27.852 00:31:53 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@971 -- # wait 1956279 00:16:28.418 [2024-05-15 00:31:54.304406] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:28.418 00:31:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:28.418 00:31:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:28.418 00:31:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:28.418 00:31:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:28.418 00:31:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:28.418 00:31:54 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.418 00:31:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:28.418 00:31:54 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.322 00:31:56 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:30.322 00:31:56 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:30.322 00:16:30.322 real 0m13.529s 00:16:30.322 user 0m24.341s 00:16:30.322 sys 0m5.715s 00:16:30.322 00:31:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:30.322 00:31:56 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:30.322 ************************************ 00:16:30.322 END TEST nvmf_host_management 00:16:30.322 ************************************ 00:16:30.322 00:31:56 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:30.322 00:31:56 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:30.322 00:31:56 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:30.322 00:31:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:30.322 ************************************ 00:16:30.322 START TEST nvmf_lvol 00:16:30.322 ************************************ 00:16:30.322 00:31:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:30.581 * Looking for test storage... 
00:16:30.581 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.581 00:31:56 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol 
-- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:30.581 00:31:56 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:37.157 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:37.157 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:37.157 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:37.157 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:37.157 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:37.157 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:37.157 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:37.157 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:37.157 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:37.157 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:37.157 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:37.157 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:37.157 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:37.157 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:37.157 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:37.157 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:37.157 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:37.157 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:37.157 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:37.157 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:37.157 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:37.157 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:16:37.158 Found 0000:27:00.0 (0x8086 - 0x159b) 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:16:37.158 Found 0000:27:00.1 (0x8086 - 0x159b) 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:16:37.158 Found net devices under 0000:27:00.0: cvl_0_0 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:16:37.158 Found net devices under 0000:27:00.1: cvl_0_1 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:37.158 
00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:37.158 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:37.158 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:16:37.158 00:16:37.158 --- 10.0.0.2 ping statistics --- 00:16:37.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.158 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:37.158 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:37.158 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:16:37.158 00:16:37.158 --- 10.0.0.1 ping statistics --- 00:16:37.158 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:37.158 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@721 -- # xtrace_disable 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1961322 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1961322 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@828 -- # '[' -z 1961322 ']' 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:37.158 00:32:02 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:37.158 [2024-05-15 00:32:02.690572] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:16:37.158 [2024-05-15 00:32:02.690702] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:37.158 EAL: No free 2048 kB hugepages reported on node 1 00:16:37.158 [2024-05-15 00:32:02.828934] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:37.158 [2024-05-15 00:32:02.928062] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:37.158 [2024-05-15 00:32:02.928114] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:37.158 [2024-05-15 00:32:02.928124] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:37.158 [2024-05-15 00:32:02.928134] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:37.158 [2024-05-15 00:32:02.928141] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:37.158 [2024-05-15 00:32:02.928243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.158 [2024-05-15 00:32:02.928328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.158 [2024-05-15 00:32:02.928336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:37.427 00:32:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:37.427 00:32:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@861 -- # return 0 00:16:37.427 00:32:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:37.427 00:32:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:37.427 00:32:03 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:37.428 00:32:03 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:37.428 00:32:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:37.428 [2024-05-15 00:32:03.574964] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:37.693 00:32:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:37.693 00:32:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:37.693 00:32:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:37.953 00:32:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:37.953 00:32:03 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:38.211 00:32:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:38.211 00:32:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=35fabd79-ccc1-4c6b-be2f-2077910ef27d 00:16:38.211 00:32:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 35fabd79-ccc1-4c6b-be2f-2077910ef27d lvol 20 00:16:38.469 00:32:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=310518b1-37ce-47bd-b29c-05f490cbe2b3 00:16:38.469 00:32:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:38.469 00:32:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 310518b1-37ce-47bd-b29c-05f490cbe2b3 00:16:38.729 00:32:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:38.729 [2024-05-15 00:32:04.816643] 
nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:16:38.729 [2024-05-15 00:32:04.816951] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:38.729 00:32:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:38.990 00:32:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1961934 00:16:38.990 00:32:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:38.990 00:32:04 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:38.990 EAL: No free 2048 kB hugepages reported on node 1 00:16:39.926 00:32:05 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 310518b1-37ce-47bd-b29c-05f490cbe2b3 MY_SNAPSHOT 00:16:40.184 00:32:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=13f93bee-d1f8-4ac8-9558-fbab9634e7da 00:16:40.184 00:32:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 310518b1-37ce-47bd-b29c-05f490cbe2b3 30 00:16:40.443 00:32:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 13f93bee-d1f8-4ac8-9558-fbab9634e7da MY_CLONE 00:16:40.443 00:32:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=8e9e8abb-5ab4-4836-a74d-69590500f282 00:16:40.443 00:32:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 8e9e8abb-5ab4-4836-a74d-69590500f282 00:16:41.011 00:32:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1961934 00:16:51.045 Initializing NVMe Controllers 00:16:51.045 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:51.045 Controller IO queue size 128, less than required. 00:16:51.045 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:51.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:51.045 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:51.045 Initialization complete. Launching workers. 
00:16:51.045 ======================================================== 00:16:51.045 Latency(us) 00:16:51.045 Device Information : IOPS MiB/s Average min max 00:16:51.045 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 14402.10 56.26 8891.42 1184.53 56378.53 00:16:51.045 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 14201.90 55.48 9012.40 2747.23 69887.07 00:16:51.045 ======================================================== 00:16:51.045 Total : 28604.00 111.73 8951.49 1184.53 69887.07 00:16:51.045 00:16:51.045 00:32:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:51.045 00:32:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 310518b1-37ce-47bd-b29c-05f490cbe2b3 00:16:51.045 00:32:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 35fabd79-ccc1-4c6b-be2f-2077910ef27d 00:16:51.045 00:32:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:51.045 00:32:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:51.045 00:32:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:51.045 00:32:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:51.045 00:32:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:51.045 00:32:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:51.045 00:32:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:51.045 00:32:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:51.045 00:32:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:51.045 rmmod nvme_tcp 00:16:51.045 rmmod nvme_fabrics 00:16:51.045 rmmod nvme_keyring 00:16:51.045 00:32:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:51.045 00:32:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:51.045 00:32:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:51.045 00:32:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1961322 ']' 00:16:51.045 00:32:15 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1961322 00:16:51.045 00:32:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@947 -- # '[' -z 1961322 ']' 00:16:51.045 00:32:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@951 -- # kill -0 1961322 00:16:51.045 00:32:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # uname 00:16:51.045 00:32:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:16:51.045 00:32:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1961322 00:16:51.045 00:32:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:16:51.045 00:32:15 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:16:51.045 00:32:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1961322' 00:16:51.045 killing process with pid 1961322 00:16:51.045 00:32:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # kill 1961322 00:16:51.045 [2024-05-15 00:32:16.001036] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in 
v24.09 hit 1 times 00:16:51.045 00:32:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@971 -- # wait 1961322 00:16:51.045 00:32:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:51.045 00:32:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:51.045 00:32:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:51.045 00:32:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:51.045 00:32:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:51.045 00:32:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.045 00:32:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.045 00:32:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:52.953 00:16:52.953 real 0m22.163s 00:16:52.953 user 1m3.182s 00:16:52.953 sys 0m6.946s 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # xtrace_disable 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:52.953 ************************************ 00:16:52.953 END TEST nvmf_lvol 00:16:52.953 ************************************ 00:16:52.953 00:32:18 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:52.953 00:32:18 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:16:52.953 00:32:18 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:52.953 00:32:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:52.953 ************************************ 00:16:52.953 START TEST nvmf_lvs_grow 00:16:52.953 ************************************ 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:52.953 * Looking for test storage... 
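The nvmf_lvol test above reduces to the following RPC sequence; this is a condensed sketch of the commands recorded in the log, not the literal script (rpc.py stands for the full-path scripts/rpc.py shown above, the trap/cleanup handling is omitted, and the <...> UUIDs are placeholders for the values each create call prints, which differ on every run):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                                   # Malloc0
    rpc.py bdev_malloc_create 64 512                                   # Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs                          # prints <lvs-uuid>
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 20                      # 20 MiB volume, prints <lvol-uuid>
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 <lvol-uuid>
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &             # I/O load while the lvol is reshaped
    rpc.py bdev_lvol_snapshot <lvol-uuid> MY_SNAPSHOT                  # prints <snapshot-uuid>
    rpc.py bdev_lvol_resize <lvol-uuid> 30
    rpc.py bdev_lvol_clone <snapshot-uuid> MY_CLONE                    # prints <clone-uuid>
    rpc.py bdev_lvol_inflate <clone-uuid>
    wait                                                               # let the 10 s perf run finish
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py bdev_lvol_delete <lvol-uuid>
    rpc.py bdev_lvol_delete_lvstore -u <lvs-uuid>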
00:16:52.953 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:52.953 00:32:18 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:52.954 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:52.954 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:52.954 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:52.954 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:52.954 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:52.954 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:52.954 00:32:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:52.954 00:32:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:52.954 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:16:52.954 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:52.954 00:32:18 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:16:52.954 00:32:18 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow 
-- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:16:58.226 Found 0000:27:00.0 (0x8086 - 0x159b) 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:16:58.226 Found 0000:27:00.1 (0x8086 - 0x159b) 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:16:58.226 Found net devices under 0000:27:00.0: cvl_0_0 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:16:58.226 Found net devices under 0000:27:00.1: cvl_0_1 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:58.226 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:58.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:58.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:16:58.226 00:16:58.226 --- 10.0.0.2 ping statistics --- 00:16:58.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.227 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:16:58.227 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:58.227 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:58.227 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:16:58.227 00:16:58.227 --- 10.0.0.1 ping statistics --- 00:16:58.227 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.227 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:16:58.227 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:58.227 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:16:58.227 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:58.227 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:58.227 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:58.227 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:58.227 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:58.227 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:58.227 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:58.227 00:32:24 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:58.227 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:58.227 00:32:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@721 -- # xtrace_disable 00:16:58.227 00:32:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:58.227 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1968005 00:16:58.227 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1968005 00:16:58.227 00:32:24 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:58.227 00:32:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@828 -- # '[' -z 1968005 ']' 00:16:58.227 00:32:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.227 00:32:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local max_retries=100 00:16:58.227 00:32:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.227 00:32:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # xtrace_disable 00:16:58.227 00:32:24 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:58.485 [2024-05-15 00:32:24.395212] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:16:58.485 [2024-05-15 00:32:24.395287] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.485 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.485 [2024-05-15 00:32:24.487316] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.485 [2024-05-15 00:32:24.593478] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:58.485 [2024-05-15 00:32:24.593521] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:58.485 [2024-05-15 00:32:24.593532] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:58.485 [2024-05-15 00:32:24.593544] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:58.485 [2024-05-15 00:32:24.593558] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:58.485 [2024-05-15 00:32:24.593594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.053 00:32:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:16:59.053 00:32:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@861 -- # return 0 00:16:59.053 00:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:59.053 00:32:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@727 -- # xtrace_disable 00:16:59.053 00:32:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:59.053 00:32:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:59.053 00:32:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:59.314 [2024-05-15 00:32:25.273027] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:59.314 00:32:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:59.314 00:32:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:16:59.314 00:32:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1104 -- # xtrace_disable 00:16:59.314 00:32:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:59.314 ************************************ 00:16:59.314 START TEST lvs_grow_clean 00:16:59.314 ************************************ 00:16:59.314 00:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1122 -- # lvs_grow 00:16:59.314 00:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:59.314 00:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:59.314 00:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:59.314 00:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:59.314 00:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:59.314 00:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:59.314 00:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:59.314 00:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:59.314 00:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:59.574 00:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:59.574 
00:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:59.574 00:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=e6717c8b-0263-486a-90a7-453fde8d5396 00:16:59.574 00:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:59.574 00:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6717c8b-0263-486a-90a7-453fde8d5396 00:16:59.832 00:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:59.832 00:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:59.832 00:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e6717c8b-0263-486a-90a7-453fde8d5396 lvol 150 00:16:59.832 00:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a2394c30-7fa7-4850-b44b-2476f53f6091 00:16:59.832 00:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:59.832 00:32:25 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:00.090 [2024-05-15 00:32:26.083391] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:00.090 [2024-05-15 00:32:26.083461] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:00.090 true 00:17:00.090 00:32:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6717c8b-0263-486a-90a7-453fde8d5396 00:17:00.090 00:32:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:00.090 00:32:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:00.091 00:32:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:00.348 00:32:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a2394c30-7fa7-4850-b44b-2476f53f6091 00:17:00.608 00:32:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:00.608 [2024-05-15 00:32:26.627574] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:00.608 [2024-05-15 00:32:26.627838] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:17:00.608 00:32:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:00.867 00:32:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1968550 00:17:00.867 00:32:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:00.867 00:32:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1968550 /var/tmp/bdevperf.sock 00:17:00.867 00:32:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:00.867 00:32:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@828 -- # '[' -z 1968550 ']' 00:17:00.867 00:32:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:00.867 00:32:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:00.867 00:32:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:00.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:00.867 00:32:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:00.867 00:32:26 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:00.867 [2024-05-15 00:32:26.844961] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:17:00.867 [2024-05-15 00:32:26.845074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1968550 ] 00:17:00.867 EAL: No free 2048 kB hugepages reported on node 1 00:17:00.867 [2024-05-15 00:32:26.983734] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.127 [2024-05-15 00:32:27.150195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.692 00:32:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:01.693 00:32:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@861 -- # return 0 00:17:01.693 00:32:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:01.950 Nvme0n1 00:17:01.950 00:32:27 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:01.950 [ 00:17:01.950 { 00:17:01.950 "name": "Nvme0n1", 00:17:01.950 "aliases": [ 00:17:01.950 "a2394c30-7fa7-4850-b44b-2476f53f6091" 00:17:01.950 ], 00:17:01.950 "product_name": "NVMe disk", 00:17:01.950 "block_size": 4096, 00:17:01.950 "num_blocks": 38912, 00:17:01.950 "uuid": "a2394c30-7fa7-4850-b44b-2476f53f6091", 00:17:01.951 "assigned_rate_limits": { 00:17:01.951 "rw_ios_per_sec": 0, 00:17:01.951 "rw_mbytes_per_sec": 0, 00:17:01.951 "r_mbytes_per_sec": 0, 00:17:01.951 "w_mbytes_per_sec": 0 00:17:01.951 }, 00:17:01.951 "claimed": false, 00:17:01.951 "zoned": false, 00:17:01.951 "supported_io_types": { 00:17:01.951 "read": true, 00:17:01.951 "write": true, 00:17:01.951 "unmap": true, 00:17:01.951 "write_zeroes": true, 00:17:01.951 "flush": true, 00:17:01.951 "reset": true, 00:17:01.951 "compare": true, 00:17:01.951 "compare_and_write": true, 00:17:01.951 "abort": true, 00:17:01.951 "nvme_admin": true, 00:17:01.951 "nvme_io": true 00:17:01.951 }, 00:17:01.951 "memory_domains": [ 00:17:01.951 { 00:17:01.951 "dma_device_id": "system", 00:17:01.951 "dma_device_type": 1 00:17:01.951 } 00:17:01.951 ], 00:17:01.951 "driver_specific": { 00:17:01.951 "nvme": [ 00:17:01.951 { 00:17:01.951 "trid": { 00:17:01.951 "trtype": "TCP", 00:17:01.951 "adrfam": "IPv4", 00:17:01.951 "traddr": "10.0.0.2", 00:17:01.951 "trsvcid": "4420", 00:17:01.951 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:01.951 }, 00:17:01.951 "ctrlr_data": { 00:17:01.951 "cntlid": 1, 00:17:01.951 "vendor_id": "0x8086", 00:17:01.951 "model_number": "SPDK bdev Controller", 00:17:01.951 "serial_number": "SPDK0", 00:17:01.951 "firmware_revision": "24.05", 00:17:01.951 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:01.951 "oacs": { 00:17:01.951 "security": 0, 00:17:01.951 "format": 0, 00:17:01.951 "firmware": 0, 00:17:01.951 "ns_manage": 0 00:17:01.951 }, 00:17:01.951 "multi_ctrlr": true, 00:17:01.951 "ana_reporting": false 00:17:01.951 }, 00:17:01.951 "vs": { 00:17:01.951 "nvme_version": "1.3" 00:17:01.951 }, 00:17:01.951 "ns_data": { 00:17:01.951 "id": 1, 00:17:01.951 "can_share": true 00:17:01.951 } 00:17:01.951 } 00:17:01.951 ], 00:17:01.951 "mp_policy": "active_passive" 00:17:01.951 } 00:17:01.951 } 00:17:01.951 ] 00:17:01.951 00:32:28 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1968855 00:17:01.951 00:32:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:01.951 00:32:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:02.209 Running I/O for 10 seconds... 00:17:03.145 Latency(us) 00:17:03.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.145 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:03.145 Nvme0n1 : 1.00 22954.00 89.66 0.00 0.00 0.00 0.00 0.00 00:17:03.145 =================================================================================================================== 00:17:03.145 Total : 22954.00 89.66 0.00 0.00 0.00 0.00 0.00 00:17:03.145 00:17:04.079 00:32:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e6717c8b-0263-486a-90a7-453fde8d5396 00:17:04.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:04.079 Nvme0n1 : 2.00 22781.00 88.99 0.00 0.00 0.00 0.00 0.00 00:17:04.079 =================================================================================================================== 00:17:04.079 Total : 22781.00 88.99 0.00 0.00 0.00 0.00 0.00 00:17:04.079 00:17:04.079 true 00:17:04.079 00:32:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6717c8b-0263-486a-90a7-453fde8d5396 00:17:04.079 00:32:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:04.337 00:32:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:04.337 00:32:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:04.337 00:32:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1968855 00:17:04.990 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:04.990 Nvme0n1 : 3.00 22798.00 89.05 0.00 0.00 0.00 0.00 0.00 00:17:04.990 =================================================================================================================== 00:17:04.990 Total : 22798.00 89.05 0.00 0.00 0.00 0.00 0.00 00:17:04.990 00:17:06.363 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:06.363 Nvme0n1 : 4.00 22886.50 89.40 0.00 0.00 0.00 0.00 0.00 00:17:06.363 =================================================================================================================== 00:17:06.363 Total : 22886.50 89.40 0.00 0.00 0.00 0.00 0.00 00:17:06.363 00:17:07.297 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:07.297 Nvme0n1 : 5.00 22936.40 89.60 0.00 0.00 0.00 0.00 0.00 00:17:07.298 =================================================================================================================== 00:17:07.298 Total : 22936.40 89.60 0.00 0.00 0.00 0.00 0.00 00:17:07.298 00:17:08.232 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:08.232 Nvme0n1 : 6.00 22948.33 89.64 0.00 0.00 0.00 0.00 0.00 00:17:08.232 
=================================================================================================================== 00:17:08.232 Total : 22948.33 89.64 0.00 0.00 0.00 0.00 0.00 00:17:08.232 00:17:09.167 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:09.168 Nvme0n1 : 7.00 22991.14 89.81 0.00 0.00 0.00 0.00 0.00 00:17:09.168 =================================================================================================================== 00:17:09.168 Total : 22991.14 89.81 0.00 0.00 0.00 0.00 0.00 00:17:09.168 00:17:10.105 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:10.105 Nvme0n1 : 8.00 23000.25 89.84 0.00 0.00 0.00 0.00 0.00 00:17:10.105 =================================================================================================================== 00:17:10.105 Total : 23000.25 89.84 0.00 0.00 0.00 0.00 0.00 00:17:10.105 00:17:11.041 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:11.041 Nvme0n1 : 9.00 23019.78 89.92 0.00 0.00 0.00 0.00 0.00 00:17:11.041 =================================================================================================================== 00:17:11.041 Total : 23019.78 89.92 0.00 0.00 0.00 0.00 0.00 00:17:11.041 00:17:11.976 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:11.976 Nvme0n1 : 10.00 23027.40 89.95 0.00 0.00 0.00 0.00 0.00 00:17:11.976 =================================================================================================================== 00:17:11.976 Total : 23027.40 89.95 0.00 0.00 0.00 0.00 0.00 00:17:11.976 00:17:11.976 00:17:11.976 Latency(us) 00:17:11.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:11.976 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:11.976 Nvme0n1 : 10.01 23027.32 89.95 0.00 0.00 5554.18 2966.37 12072.42 00:17:11.976 =================================================================================================================== 00:17:11.976 Total : 23027.32 89.95 0.00 0.00 5554.18 2966.37 12072.42 00:17:11.976 0 00:17:12.235 00:32:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1968550 00:17:12.235 00:32:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@947 -- # '[' -z 1968550 ']' 00:17:12.235 00:32:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@951 -- # kill -0 1968550 00:17:12.235 00:32:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # uname 00:17:12.235 00:32:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:12.235 00:32:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1968550 00:17:12.235 00:32:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:17:12.235 00:32:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:17:12.235 00:32:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1968550' 00:17:12.235 killing process with pid 1968550 00:17:12.235 00:32:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # kill 1968550 00:17:12.235 Received shutdown signal, test time was about 10.000000 seconds 00:17:12.235 00:17:12.235 Latency(us) 00:17:12.235 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:17:12.235 =================================================================================================================== 00:17:12.235 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:12.235 00:32:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # wait 1968550 00:17:12.494 00:32:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:12.752 00:32:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:12.752 00:32:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6717c8b-0263-486a-90a7-453fde8d5396 00:17:12.752 00:32:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:13.012 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:13.012 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:13.012 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:13.012 [2024-05-15 00:32:39.131145] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:13.012 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6717c8b-0263-486a-90a7-453fde8d5396 00:17:13.012 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@649 -- # local es=0 00:17:13.012 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6717c8b-0263-486a-90a7-453fde8d5396 00:17:13.013 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:13.013 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:13.013 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:13.013 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:13.013 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:13.013 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:13.013 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:13.013 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:17:13.271 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6717c8b-0263-486a-90a7-453fde8d5396 00:17:13.271 request: 00:17:13.271 { 00:17:13.271 "uuid": "e6717c8b-0263-486a-90a7-453fde8d5396", 00:17:13.271 "method": "bdev_lvol_get_lvstores", 00:17:13.271 "req_id": 1 00:17:13.271 } 00:17:13.271 Got JSON-RPC error response 00:17:13.271 response: 00:17:13.271 { 00:17:13.271 "code": -19, 00:17:13.271 "message": "No such device" 00:17:13.271 } 00:17:13.271 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # es=1 00:17:13.271 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:17:13.271 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:17:13.271 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:17:13.271 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:13.271 aio_bdev 00:17:13.529 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a2394c30-7fa7-4850-b44b-2476f53f6091 00:17:13.529 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@896 -- # local bdev_name=a2394c30-7fa7-4850-b44b-2476f53f6091 00:17:13.529 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:17:13.529 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local i 00:17:13.529 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # [[ -z '' ]] 00:17:13.529 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:17:13.529 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:13.529 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a2394c30-7fa7-4850-b44b-2476f53f6091 -t 2000 00:17:13.787 [ 00:17:13.787 { 00:17:13.787 "name": "a2394c30-7fa7-4850-b44b-2476f53f6091", 00:17:13.787 "aliases": [ 00:17:13.787 "lvs/lvol" 00:17:13.787 ], 00:17:13.788 "product_name": "Logical Volume", 00:17:13.788 "block_size": 4096, 00:17:13.788 "num_blocks": 38912, 00:17:13.788 "uuid": "a2394c30-7fa7-4850-b44b-2476f53f6091", 00:17:13.788 "assigned_rate_limits": { 00:17:13.788 "rw_ios_per_sec": 0, 00:17:13.788 "rw_mbytes_per_sec": 0, 00:17:13.788 "r_mbytes_per_sec": 0, 00:17:13.788 "w_mbytes_per_sec": 0 00:17:13.788 }, 00:17:13.788 "claimed": false, 00:17:13.788 "zoned": false, 00:17:13.788 "supported_io_types": { 00:17:13.788 "read": true, 00:17:13.788 "write": true, 00:17:13.788 "unmap": true, 00:17:13.788 "write_zeroes": true, 00:17:13.788 "flush": false, 00:17:13.788 "reset": true, 00:17:13.788 "compare": false, 00:17:13.788 "compare_and_write": false, 00:17:13.788 "abort": false, 00:17:13.788 "nvme_admin": false, 00:17:13.788 "nvme_io": false 00:17:13.788 }, 00:17:13.788 "driver_specific": { 00:17:13.788 "lvol": { 00:17:13.788 "lvol_store_uuid": "e6717c8b-0263-486a-90a7-453fde8d5396", 00:17:13.788 "base_bdev": "aio_bdev", 00:17:13.788 "thin_provision": false, 00:17:13.788 
"num_allocated_clusters": 38, 00:17:13.788 "snapshot": false, 00:17:13.788 "clone": false, 00:17:13.788 "esnap_clone": false 00:17:13.788 } 00:17:13.788 } 00:17:13.788 } 00:17:13.788 ] 00:17:13.788 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # return 0 00:17:13.788 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6717c8b-0263-486a-90a7-453fde8d5396 00:17:13.788 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:13.788 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:13.788 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e6717c8b-0263-486a-90a7-453fde8d5396 00:17:13.788 00:32:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:14.047 00:32:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:14.047 00:32:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a2394c30-7fa7-4850-b44b-2476f53f6091 00:17:14.047 00:32:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e6717c8b-0263-486a-90a7-453fde8d5396 00:17:14.306 00:32:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:14.566 00:32:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:14.566 00:17:14.566 real 0m15.178s 00:17:14.566 user 0m14.663s 00:17:14.566 sys 0m1.279s 00:17:14.566 00:32:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:14.566 00:32:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:14.566 ************************************ 00:17:14.566 END TEST lvs_grow_clean 00:17:14.566 ************************************ 00:17:14.566 00:32:40 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:14.566 00:32:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:17:14.566 00:32:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:14.566 00:32:40 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:14.566 ************************************ 00:17:14.566 START TEST lvs_grow_dirty 00:17:14.566 ************************************ 00:17:14.566 00:32:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1122 -- # lvs_grow dirty 00:17:14.566 00:32:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:14.566 00:32:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:14.566 00:32:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:14.566 00:32:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:14.566 00:32:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:14.566 00:32:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:14.566 00:32:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:14.566 00:32:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:14.566 00:32:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:14.825 00:32:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:14.825 00:32:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:14.825 00:32:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=85ba3f37-6d8f-4987-9345-a628ff1363e0 00:17:14.825 00:32:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85ba3f37-6d8f-4987-9345-a628ff1363e0 00:17:14.825 00:32:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:15.084 00:32:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:15.084 00:32:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:15.084 00:32:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 85ba3f37-6d8f-4987-9345-a628ff1363e0 lvol 150 00:17:15.084 00:32:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=09e4d744-78b7-4440-a211-9c236a258f3b 00:17:15.084 00:32:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:15.084 00:32:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:15.342 [2024-05-15 00:32:41.325358] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:15.342 [2024-05-15 00:32:41.325426] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:15.342 true 00:17:15.342 00:32:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85ba3f37-6d8f-4987-9345-a628ff1363e0 00:17:15.342 00:32:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:15.342 00:32:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters 
== 49 )) 00:17:15.342 00:32:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:15.600 00:32:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 09e4d744-78b7-4440-a211-9c236a258f3b 00:17:15.600 00:32:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:15.858 [2024-05-15 00:32:41.873761] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.858 00:32:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:16.117 00:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1971595 00:17:16.117 00:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:16.117 00:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1971595 /var/tmp/bdevperf.sock 00:17:16.117 00:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:16.117 00:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # '[' -z 1971595 ']' 00:17:16.117 00:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:16.117 00:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:16.117 00:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:16.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:16.117 00:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:16.117 00:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:16.117 [2024-05-15 00:32:42.101362] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:17:16.117 [2024-05-15 00:32:42.101483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1971595 ] 00:17:16.117 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.117 [2024-05-15 00:32:42.216922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:16.376 [2024-05-15 00:32:42.308449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.942 00:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:16.942 00:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@861 -- # return 0 00:17:16.942 00:32:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:17.200 Nvme0n1 00:17:17.200 00:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:17.200 [ 00:17:17.200 { 00:17:17.200 "name": "Nvme0n1", 00:17:17.200 "aliases": [ 00:17:17.200 "09e4d744-78b7-4440-a211-9c236a258f3b" 00:17:17.200 ], 00:17:17.200 "product_name": "NVMe disk", 00:17:17.200 "block_size": 4096, 00:17:17.200 "num_blocks": 38912, 00:17:17.200 "uuid": "09e4d744-78b7-4440-a211-9c236a258f3b", 00:17:17.200 "assigned_rate_limits": { 00:17:17.200 "rw_ios_per_sec": 0, 00:17:17.200 "rw_mbytes_per_sec": 0, 00:17:17.200 "r_mbytes_per_sec": 0, 00:17:17.200 "w_mbytes_per_sec": 0 00:17:17.200 }, 00:17:17.200 "claimed": false, 00:17:17.200 "zoned": false, 00:17:17.200 "supported_io_types": { 00:17:17.200 "read": true, 00:17:17.200 "write": true, 00:17:17.200 "unmap": true, 00:17:17.200 "write_zeroes": true, 00:17:17.200 "flush": true, 00:17:17.200 "reset": true, 00:17:17.200 "compare": true, 00:17:17.200 "compare_and_write": true, 00:17:17.200 "abort": true, 00:17:17.200 "nvme_admin": true, 00:17:17.200 "nvme_io": true 00:17:17.200 }, 00:17:17.200 "memory_domains": [ 00:17:17.200 { 00:17:17.200 "dma_device_id": "system", 00:17:17.200 "dma_device_type": 1 00:17:17.200 } 00:17:17.200 ], 00:17:17.200 "driver_specific": { 00:17:17.200 "nvme": [ 00:17:17.200 { 00:17:17.200 "trid": { 00:17:17.200 "trtype": "TCP", 00:17:17.200 "adrfam": "IPv4", 00:17:17.200 "traddr": "10.0.0.2", 00:17:17.200 "trsvcid": "4420", 00:17:17.200 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:17.200 }, 00:17:17.200 "ctrlr_data": { 00:17:17.200 "cntlid": 1, 00:17:17.200 "vendor_id": "0x8086", 00:17:17.200 "model_number": "SPDK bdev Controller", 00:17:17.200 "serial_number": "SPDK0", 00:17:17.200 "firmware_revision": "24.05", 00:17:17.200 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:17.200 "oacs": { 00:17:17.200 "security": 0, 00:17:17.200 "format": 0, 00:17:17.200 "firmware": 0, 00:17:17.200 "ns_manage": 0 00:17:17.200 }, 00:17:17.200 "multi_ctrlr": true, 00:17:17.200 "ana_reporting": false 00:17:17.200 }, 00:17:17.200 "vs": { 00:17:17.200 "nvme_version": "1.3" 00:17:17.200 }, 00:17:17.200 "ns_data": { 00:17:17.200 "id": 1, 00:17:17.200 "can_share": true 00:17:17.200 } 00:17:17.200 } 00:17:17.200 ], 00:17:17.200 "mp_policy": "active_passive" 00:17:17.200 } 00:17:17.200 } 00:17:17.200 ] 00:17:17.200 00:32:43 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1971772 00:17:17.200 00:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:17.200 00:32:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:17.474 Running I/O for 10 seconds... 00:17:18.418 Latency(us) 00:17:18.418 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:18.418 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:18.418 Nvme0n1 : 1.00 23632.00 92.31 0.00 0.00 0.00 0.00 0.00 00:17:18.418 =================================================================================================================== 00:17:18.418 Total : 23632.00 92.31 0.00 0.00 0.00 0.00 0.00 00:17:18.418 00:17:19.353 00:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 85ba3f37-6d8f-4987-9345-a628ff1363e0 00:17:19.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:19.353 Nvme0n1 : 2.00 23660.50 92.42 0.00 0.00 0.00 0.00 0.00 00:17:19.353 =================================================================================================================== 00:17:19.353 Total : 23660.50 92.42 0.00 0.00 0.00 0.00 0.00 00:17:19.353 00:17:19.353 true 00:17:19.353 00:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85ba3f37-6d8f-4987-9345-a628ff1363e0 00:17:19.353 00:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:19.612 00:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:19.612 00:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:19.612 00:32:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1971772 00:17:20.239 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:20.239 Nvme0n1 : 3.00 23380.67 91.33 0.00 0.00 0.00 0.00 0.00 00:17:20.239 =================================================================================================================== 00:17:20.239 Total : 23380.67 91.33 0.00 0.00 0.00 0.00 0.00 00:17:20.239 00:17:21.613 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:21.613 Nvme0n1 : 4.00 23313.50 91.07 0.00 0.00 0.00 0.00 0.00 00:17:21.613 =================================================================================================================== 00:17:21.613 Total : 23313.50 91.07 0.00 0.00 0.00 0.00 0.00 00:17:21.613 00:17:22.547 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:22.547 Nvme0n1 : 5.00 23258.80 90.85 0.00 0.00 0.00 0.00 0.00 00:17:22.547 =================================================================================================================== 00:17:22.547 Total : 23258.80 90.85 0.00 0.00 0.00 0.00 0.00 00:17:22.547 00:17:23.483 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:23.483 Nvme0n1 : 6.00 23219.67 90.70 0.00 0.00 0.00 0.00 0.00 00:17:23.483 
=================================================================================================================== 00:17:23.483 Total : 23219.67 90.70 0.00 0.00 0.00 0.00 0.00 00:17:23.483 00:17:24.420 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:24.420 Nvme0n1 : 7.00 23155.14 90.45 0.00 0.00 0.00 0.00 0.00 00:17:24.420 =================================================================================================================== 00:17:24.420 Total : 23155.14 90.45 0.00 0.00 0.00 0.00 0.00 00:17:24.420 00:17:25.355 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:25.355 Nvme0n1 : 8.00 23144.75 90.41 0.00 0.00 0.00 0.00 0.00 00:17:25.355 =================================================================================================================== 00:17:25.355 Total : 23144.75 90.41 0.00 0.00 0.00 0.00 0.00 00:17:25.355 00:17:26.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:26.288 Nvme0n1 : 9.00 23142.89 90.40 0.00 0.00 0.00 0.00 0.00 00:17:26.288 =================================================================================================================== 00:17:26.288 Total : 23142.89 90.40 0.00 0.00 0.00 0.00 0.00 00:17:26.288 00:17:27.226 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:27.226 Nvme0n1 : 10.00 23139.80 90.39 0.00 0.00 0.00 0.00 0.00 00:17:27.226 =================================================================================================================== 00:17:27.226 Total : 23139.80 90.39 0.00 0.00 0.00 0.00 0.00 00:17:27.226 00:17:27.482 00:17:27.482 Latency(us) 00:17:27.482 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.482 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:27.482 Nvme0n1 : 10.01 23136.30 90.38 0.00 0.00 5527.61 2517.96 10899.67 00:17:27.482 =================================================================================================================== 00:17:27.482 Total : 23136.30 90.38 0.00 0.00 5527.61 2517.96 10899.67 00:17:27.482 0 00:17:27.482 00:32:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1971595 00:17:27.482 00:32:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@947 -- # '[' -z 1971595 ']' 00:17:27.482 00:32:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@951 -- # kill -0 1971595 00:17:27.482 00:32:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # uname 00:17:27.482 00:32:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:27.482 00:32:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1971595 00:17:27.482 00:32:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:17:27.482 00:32:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:17:27.482 00:32:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1971595' 00:17:27.482 killing process with pid 1971595 00:17:27.482 00:32:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # kill 1971595 00:17:27.482 Received shutdown signal, test time was about 10.000000 seconds 00:17:27.482 00:17:27.482 Latency(us) 00:17:27.482 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:17:27.482 =================================================================================================================== 00:17:27.482 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:27.482 00:32:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # wait 1971595 00:17:27.740 00:32:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:27.998 00:32:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:27.998 00:32:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85ba3f37-6d8f-4987-9345-a628ff1363e0 00:17:27.998 00:32:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:28.258 00:32:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:28.258 00:32:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:28.258 00:32:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1968005 00:17:28.258 00:32:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1968005 00:17:28.258 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1968005 Killed "${NVMF_APP[@]}" "$@" 00:17:28.258 00:32:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:28.258 00:32:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:28.258 00:32:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:28.258 00:32:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:28.258 00:32:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:28.258 00:32:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1974013 00:17:28.258 00:32:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1974013 00:17:28.258 00:32:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@828 -- # '[' -z 1974013 ']' 00:17:28.258 00:32:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.258 00:32:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:28.258 00:32:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:28.258 00:32:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:28.258 00:32:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:28.258 00:32:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:28.258 [2024-05-15 00:32:54.387687] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:17:28.258 [2024-05-15 00:32:54.387792] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.519 EAL: No free 2048 kB hugepages reported on node 1 00:17:28.519 [2024-05-15 00:32:54.517721] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.519 [2024-05-15 00:32:54.615513] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:28.519 [2024-05-15 00:32:54.615559] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:28.519 [2024-05-15 00:32:54.615573] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:28.519 [2024-05-15 00:32:54.615583] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:28.519 [2024-05-15 00:32:54.615591] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:28.519 [2024-05-15 00:32:54.615622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.087 00:32:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:29.087 00:32:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@861 -- # return 0 00:17:29.087 00:32:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:29.087 00:32:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:29.087 00:32:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:29.087 00:32:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:29.087 00:32:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:29.345 [2024-05-15 00:32:55.262106] blobstore.c:4838:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:29.345 [2024-05-15 00:32:55.262251] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:29.345 [2024-05-15 00:32:55.262285] blobstore.c:4785:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:29.345 00:32:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:29.345 00:32:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 09e4d744-78b7-4440-a211-9c236a258f3b 00:17:29.345 00:32:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_name=09e4d744-78b7-4440-a211-9c236a258f3b 00:17:29.345 00:32:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # 
local bdev_timeout= 00:17:29.345 00:32:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local i 00:17:29.345 00:32:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # [[ -z '' ]] 00:17:29.345 00:32:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:17:29.346 00:32:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:29.346 00:32:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 09e4d744-78b7-4440-a211-9c236a258f3b -t 2000 00:17:29.604 [ 00:17:29.604 { 00:17:29.604 "name": "09e4d744-78b7-4440-a211-9c236a258f3b", 00:17:29.604 "aliases": [ 00:17:29.604 "lvs/lvol" 00:17:29.604 ], 00:17:29.604 "product_name": "Logical Volume", 00:17:29.604 "block_size": 4096, 00:17:29.604 "num_blocks": 38912, 00:17:29.604 "uuid": "09e4d744-78b7-4440-a211-9c236a258f3b", 00:17:29.604 "assigned_rate_limits": { 00:17:29.604 "rw_ios_per_sec": 0, 00:17:29.604 "rw_mbytes_per_sec": 0, 00:17:29.604 "r_mbytes_per_sec": 0, 00:17:29.604 "w_mbytes_per_sec": 0 00:17:29.604 }, 00:17:29.604 "claimed": false, 00:17:29.604 "zoned": false, 00:17:29.604 "supported_io_types": { 00:17:29.604 "read": true, 00:17:29.604 "write": true, 00:17:29.604 "unmap": true, 00:17:29.604 "write_zeroes": true, 00:17:29.604 "flush": false, 00:17:29.604 "reset": true, 00:17:29.604 "compare": false, 00:17:29.604 "compare_and_write": false, 00:17:29.604 "abort": false, 00:17:29.604 "nvme_admin": false, 00:17:29.604 "nvme_io": false 00:17:29.604 }, 00:17:29.604 "driver_specific": { 00:17:29.604 "lvol": { 00:17:29.604 "lvol_store_uuid": "85ba3f37-6d8f-4987-9345-a628ff1363e0", 00:17:29.604 "base_bdev": "aio_bdev", 00:17:29.604 "thin_provision": false, 00:17:29.604 "num_allocated_clusters": 38, 00:17:29.604 "snapshot": false, 00:17:29.604 "clone": false, 00:17:29.604 "esnap_clone": false 00:17:29.604 } 00:17:29.604 } 00:17:29.604 } 00:17:29.604 ] 00:17:29.604 00:32:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # return 0 00:17:29.604 00:32:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85ba3f37-6d8f-4987-9345-a628ff1363e0 00:17:29.604 00:32:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:29.604 00:32:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:29.604 00:32:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85ba3f37-6d8f-4987-9345-a628ff1363e0 00:17:29.604 00:32:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:29.863 00:32:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:29.863 00:32:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:29.863 [2024-05-15 00:32:55.967926] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:29.863 00:32:56 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85ba3f37-6d8f-4987-9345-a628ff1363e0 00:17:29.863 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@649 -- # local es=0 00:17:29.863 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85ba3f37-6d8f-4987-9345-a628ff1363e0 00:17:29.863 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@637 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:29.863 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:29.863 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:29.863 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:29.863 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:29.863 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:17:29.863 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:29.863 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@643 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:17:29.863 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85ba3f37-6d8f-4987-9345-a628ff1363e0 00:17:30.123 request: 00:17:30.123 { 00:17:30.123 "uuid": "85ba3f37-6d8f-4987-9345-a628ff1363e0", 00:17:30.123 "method": "bdev_lvol_get_lvstores", 00:17:30.123 "req_id": 1 00:17:30.123 } 00:17:30.123 Got JSON-RPC error response 00:17:30.123 response: 00:17:30.123 { 00:17:30.123 "code": -19, 00:17:30.123 "message": "No such device" 00:17:30.123 } 00:17:30.123 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # es=1 00:17:30.123 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:17:30.123 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:17:30.123 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:17:30.123 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:30.123 aio_bdev 00:17:30.383 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 09e4d744-78b7-4440-a211-9c236a258f3b 00:17:30.383 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@896 -- # local bdev_name=09e4d744-78b7-4440-a211-9c236a258f3b 00:17:30.383 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_timeout= 00:17:30.383 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local 
i 00:17:30.383 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # [[ -z '' ]] 00:17:30.383 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # bdev_timeout=2000 00:17:30.383 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:30.383 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 09e4d744-78b7-4440-a211-9c236a258f3b -t 2000 00:17:30.640 [ 00:17:30.640 { 00:17:30.640 "name": "09e4d744-78b7-4440-a211-9c236a258f3b", 00:17:30.640 "aliases": [ 00:17:30.640 "lvs/lvol" 00:17:30.640 ], 00:17:30.640 "product_name": "Logical Volume", 00:17:30.640 "block_size": 4096, 00:17:30.640 "num_blocks": 38912, 00:17:30.640 "uuid": "09e4d744-78b7-4440-a211-9c236a258f3b", 00:17:30.640 "assigned_rate_limits": { 00:17:30.640 "rw_ios_per_sec": 0, 00:17:30.640 "rw_mbytes_per_sec": 0, 00:17:30.640 "r_mbytes_per_sec": 0, 00:17:30.640 "w_mbytes_per_sec": 0 00:17:30.641 }, 00:17:30.641 "claimed": false, 00:17:30.641 "zoned": false, 00:17:30.641 "supported_io_types": { 00:17:30.641 "read": true, 00:17:30.641 "write": true, 00:17:30.641 "unmap": true, 00:17:30.641 "write_zeroes": true, 00:17:30.641 "flush": false, 00:17:30.641 "reset": true, 00:17:30.641 "compare": false, 00:17:30.641 "compare_and_write": false, 00:17:30.641 "abort": false, 00:17:30.641 "nvme_admin": false, 00:17:30.641 "nvme_io": false 00:17:30.641 }, 00:17:30.641 "driver_specific": { 00:17:30.641 "lvol": { 00:17:30.641 "lvol_store_uuid": "85ba3f37-6d8f-4987-9345-a628ff1363e0", 00:17:30.641 "base_bdev": "aio_bdev", 00:17:30.641 "thin_provision": false, 00:17:30.641 "num_allocated_clusters": 38, 00:17:30.641 "snapshot": false, 00:17:30.641 "clone": false, 00:17:30.641 "esnap_clone": false 00:17:30.641 } 00:17:30.641 } 00:17:30.641 } 00:17:30.641 ] 00:17:30.641 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # return 0 00:17:30.641 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85ba3f37-6d8f-4987-9345-a628ff1363e0 00:17:30.641 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:30.641 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:30.641 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 85ba3f37-6d8f-4987-9345-a628ff1363e0 00:17:30.641 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:30.899 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:30.899 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 09e4d744-78b7-4440-a211-9c236a258f3b 00:17:30.899 00:32:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 85ba3f37-6d8f-4987-9345-a628ff1363e0 00:17:31.156 00:32:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:31.156 00:32:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:31.156 00:17:31.156 real 0m16.723s 00:17:31.156 user 0m43.481s 00:17:31.156 sys 0m3.140s 00:17:31.156 00:32:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:31.156 00:32:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:31.156 ************************************ 00:17:31.156 END TEST lvs_grow_dirty 00:17:31.156 ************************************ 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@805 -- # type=--id 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # id=0 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@817 -- # for n in $shm_files 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:31.417 nvmf_trace.0 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # return 0 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:31.417 rmmod nvme_tcp 00:17:31.417 rmmod nvme_fabrics 00:17:31.417 rmmod nvme_keyring 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1974013 ']' 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1974013 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@947 -- # '[' -z 1974013 ']' 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@951 -- # kill -0 1974013 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # uname 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1974013 00:17:31.417 00:32:57 
nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1974013' 00:17:31.417 killing process with pid 1974013 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # kill 1974013 00:17:31.417 00:32:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # wait 1974013 00:17:31.987 00:32:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:31.987 00:32:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:31.987 00:32:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:31.987 00:32:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:31.988 00:32:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:31.988 00:32:57 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:31.988 00:32:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:31.988 00:32:57 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:33.904 00:32:59 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:33.904 00:17:33.904 real 0m41.302s 00:17:33.904 user 1m3.443s 00:17:33.904 sys 0m9.052s 00:17:33.904 00:32:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:33.904 00:32:59 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:33.904 ************************************ 00:17:33.904 END TEST nvmf_lvs_grow 00:17:33.904 ************************************ 00:17:33.904 00:33:00 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:33.904 00:33:00 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:17:33.904 00:33:00 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:33.904 00:33:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:33.904 ************************************ 00:17:33.904 START TEST nvmf_bdev_io_wait 00:17:33.904 ************************************ 00:17:33.904 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:34.163 * Looking for test storage... 
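The lvs_grow suite above tears down by archiving the nvmf trace file out of shared memory (process_shm --id 0), unloading the NVMe/TCP kernel modules, killing the target process, and flushing the initiator address before printing its timing summary. A minimal sketch of that trace-archive step, condensed from the trace above (OUTPUT_DIR is a stand-in for the job's real output directory, which the process_shm helper in autotest_common.sh derives itself):

  # Archive the SPDK trace shm file for the app with shm id 0, as traced above.
  id=0
  shm_files=$(find /dev/shm -name "*.${id}" -printf '%f\n')
  for n in $shm_files; do
    tar -C /dev/shm/ -cvzf "${OUTPUT_DIR}/${n}_shm.tar.gz" "$n"
  done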
00:17:34.163 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:17:34.163 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:17:34.163 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:34.163 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:34.163 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:34.163 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:34.163 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:34.163 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:34.163 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:34.163 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:34.163 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:34.163 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:34.163 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:34.163 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:17:34.163 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:17:34.163 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:34.163 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:34.163 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:34.163 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:34.163 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:17:34.163 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:34.163 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:34.163 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:34.163 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.164 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.164 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.164 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:34.164 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:34.164 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:34.164 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:34.164 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:34.164 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:34.164 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:34.164 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:34.164 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:34.164 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:34.164 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:34.164 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:34.164 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:34.164 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:34.164 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:34.164 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:34.164 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:34.164 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:34.164 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:34.164 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:34.164 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:34.164 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:34.164 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:17:34.164 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:34.164 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:34.164 00:33:00 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:17:39.437 Found 0000:27:00.0 (0x8086 - 0x159b) 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:17:39.437 Found 0000:27:00.1 (0x8086 - 0x159b) 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:17:39.437 Found net devices under 0000:27:00.0: cvl_0_0 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:39.437 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:17:39.438 Found net devices under 0000:27:00.1: cvl_0_1 00:17:39.438 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:39.438 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:39.438 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:39.438 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:39.438 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:39.438 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:39.438 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:39.438 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:39.438 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:39.438 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:39.438 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:39.438 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:39.438 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:39.438 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:39.438 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:39.438 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:39.438 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:39.438 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:39.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:39.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:17:39.696 00:17:39.696 --- 10.0.0.2 ping statistics --- 00:17:39.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.696 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:39.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:39.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:17:39.696 00:17:39.696 --- 10.0.0.1 ping statistics --- 00:17:39.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:39.696 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1979090 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1979090 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@828 -- # '[' -z 1979090 ']' 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:39.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:39.696 00:33:05 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:39.954 [2024-05-15 00:33:05.916455] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
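nvmf_tcp_init has just split the two ice ports into a target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and given the target address 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in the firewall, and connectivity is verified with one ping in each direction before the target application is started inside the namespace. Condensed into a sketch from the commands traced above (interface names are specific to this run):

  # Topology set up by nvmf_tcp_init in the trace above (condensed).
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator
  # nvmf_tgt is then launched inside the namespace (see nvmfappstart above):
  # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc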
00:17:39.954 [2024-05-15 00:33:05.916561] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:39.954 EAL: No free 2048 kB hugepages reported on node 1 00:17:39.954 [2024-05-15 00:33:06.036149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:40.213 [2024-05-15 00:33:06.138353] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.213 [2024-05-15 00:33:06.138390] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.213 [2024-05-15 00:33:06.138399] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.213 [2024-05-15 00:33:06.138408] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.213 [2024-05-15 00:33:06.138416] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:40.213 [2024-05-15 00:33:06.138539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.213 [2024-05-15 00:33:06.138622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.213 [2024-05-15 00:33:06.138662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.213 [2024-05-15 00:33:06.138672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:40.472 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:40.472 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@861 -- # return 0 00:17:40.472 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:40.472 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:40.472 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:40.733 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.733 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:40.733 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.733 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:40.733 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.733 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:40.733 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.733 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:40.733 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.733 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:40.733 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.733 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:40.733 [2024-05-15 00:33:06.766728] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.733 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.733 00:33:06 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:40.733 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.733 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:40.733 Malloc0 00:17:40.733 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.733 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:40.733 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.733 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:40.733 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:40.734 [2024-05-15 00:33:06.844351] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:40.734 [2024-05-15 00:33:06.844652] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1979410 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1979413 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1979414 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:40.734 { 00:17:40.734 "params": { 00:17:40.734 "name": "Nvme$subsystem", 00:17:40.734 "trtype": "$TEST_TRANSPORT", 00:17:40.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:40.734 "adrfam": "ipv4", 00:17:40.734 "trsvcid": "$NVMF_PORT", 00:17:40.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:40.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:40.734 "hdgst": 
${hdgst:-false}, 00:17:40.734 "ddgst": ${ddgst:-false} 00:17:40.734 }, 00:17:40.734 "method": "bdev_nvme_attach_controller" 00:17:40.734 } 00:17:40.734 EOF 00:17:40.734 )") 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1979416 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:40.734 { 00:17:40.734 "params": { 00:17:40.734 "name": "Nvme$subsystem", 00:17:40.734 "trtype": "$TEST_TRANSPORT", 00:17:40.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:40.734 "adrfam": "ipv4", 00:17:40.734 "trsvcid": "$NVMF_PORT", 00:17:40.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:40.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:40.734 "hdgst": ${hdgst:-false}, 00:17:40.734 "ddgst": ${ddgst:-false} 00:17:40.734 }, 00:17:40.734 "method": "bdev_nvme_attach_controller" 00:17:40.734 } 00:17:40.734 EOF 00:17:40.734 )") 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:40.734 { 00:17:40.734 "params": { 00:17:40.734 "name": "Nvme$subsystem", 00:17:40.734 "trtype": "$TEST_TRANSPORT", 00:17:40.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:40.734 "adrfam": "ipv4", 00:17:40.734 "trsvcid": "$NVMF_PORT", 00:17:40.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:40.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:40.734 "hdgst": ${hdgst:-false}, 00:17:40.734 "ddgst": ${ddgst:-false} 00:17:40.734 }, 00:17:40.734 "method": "bdev_nvme_attach_controller" 00:17:40.734 } 00:17:40.734 EOF 00:17:40.734 )") 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@37 -- # wait 1979410 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:40.734 { 00:17:40.734 "params": { 00:17:40.734 "name": "Nvme$subsystem", 00:17:40.734 "trtype": "$TEST_TRANSPORT", 00:17:40.734 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:40.734 "adrfam": "ipv4", 00:17:40.734 "trsvcid": "$NVMF_PORT", 00:17:40.734 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:40.734 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:40.734 "hdgst": ${hdgst:-false}, 00:17:40.734 "ddgst": ${ddgst:-false} 00:17:40.734 }, 00:17:40.734 "method": "bdev_nvme_attach_controller" 00:17:40.734 } 00:17:40.734 EOF 00:17:40.734 )") 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:40.734 "params": { 00:17:40.734 "name": "Nvme1", 00:17:40.734 "trtype": "tcp", 00:17:40.734 "traddr": "10.0.0.2", 00:17:40.734 "adrfam": "ipv4", 00:17:40.734 "trsvcid": "4420", 00:17:40.734 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.734 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:40.734 "hdgst": false, 00:17:40.734 "ddgst": false 00:17:40.734 }, 00:17:40.734 "method": "bdev_nvme_attach_controller" 00:17:40.734 }' 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:40.734 "params": { 00:17:40.734 "name": "Nvme1", 00:17:40.734 "trtype": "tcp", 00:17:40.734 "traddr": "10.0.0.2", 00:17:40.734 "adrfam": "ipv4", 00:17:40.734 "trsvcid": "4420", 00:17:40.734 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.734 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:40.734 "hdgst": false, 00:17:40.734 "ddgst": false 00:17:40.734 }, 00:17:40.734 "method": "bdev_nvme_attach_controller" 00:17:40.734 }' 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:40.734 "params": { 00:17:40.734 "name": "Nvme1", 00:17:40.734 "trtype": "tcp", 00:17:40.734 "traddr": "10.0.0.2", 00:17:40.734 "adrfam": "ipv4", 00:17:40.734 "trsvcid": "4420", 00:17:40.734 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.734 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:40.734 "hdgst": false, 00:17:40.734 "ddgst": false 00:17:40.734 }, 00:17:40.734 "method": "bdev_nvme_attach_controller" 00:17:40.734 }' 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:40.734 00:33:06 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:40.734 "params": { 00:17:40.734 "name": "Nvme1", 00:17:40.734 "trtype": "tcp", 00:17:40.734 "traddr": "10.0.0.2", 00:17:40.734 "adrfam": "ipv4", 00:17:40.734 "trsvcid": "4420", 
00:17:40.734 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.734 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:40.734 "hdgst": false, 00:17:40.734 "ddgst": false 00:17:40.734 }, 00:17:40.734 "method": "bdev_nvme_attach_controller" 00:17:40.734 }' 00:17:40.994 [2024-05-15 00:33:06.921353] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:17:40.994 [2024-05-15 00:33:06.921468] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:40.994 [2024-05-15 00:33:06.927672] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:17:40.994 [2024-05-15 00:33:06.927778] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:40.994 [2024-05-15 00:33:06.932569] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:17:40.994 [2024-05-15 00:33:06.932706] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:17:40.994 [2024-05-15 00:33:06.933530] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:17:40.994 [2024-05-15 00:33:06.933675] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:40.994 EAL: No free 2048 kB hugepages reported on node 1 00:17:40.994 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.254 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.254 [2024-05-15 00:33:07.186742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.254 [2024-05-15 00:33:07.232782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.254 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.254 [2024-05-15 00:33:07.326921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:17:41.254 [2024-05-15 00:33:07.331815] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.254 [2024-05-15 00:33:07.370775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:17:41.512 [2024-05-15 00:33:07.425004] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.512 [2024-05-15 00:33:07.471369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:41.512 [2024-05-15 00:33:07.567485] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:17:41.512 Running I/O for 1 seconds... 00:17:41.770 Running I/O for 1 seconds... 00:17:41.770 Running I/O for 1 seconds... 00:17:42.028 Running I/O for 1 seconds... 
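At this point the target is serving Malloc0 through nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and four bdevperf instances (write, read, flush and unmap; queue depth 128, 4096-byte I/O, 1 second each, on core masks 0x10, 0x20, 0x40 and 0x80) run against it in parallel using the generated JSON shown above. In the per-job tables that follow, the MiB/s column is simply IOPS scaled by the 4 KiB I/O size, which gives a quick sanity check on the numbers:

  11537.36 IOPS × 4096 B / 2^20 ≈ 45.07 MiB/s    (unmap job)
  134882.09 IOPS × 4096 B / 2^20 ≈ 526.88 MiB/s  (flush job)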
00:17:42.597 00:17:42.597 Latency(us) 00:17:42.597 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.597 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:42.597 Nvme1n1 : 1.02 11537.36 45.07 0.00 0.00 10987.41 5656.79 23592.96 00:17:42.597 =================================================================================================================== 00:17:42.597 Total : 11537.36 45.07 0.00 0.00 10987.41 5656.79 23592.96 00:17:42.856 00:17:42.856 Latency(us) 00:17:42.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.856 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:42.856 Nvme1n1 : 1.01 8826.65 34.48 0.00 0.00 14446.49 6208.67 24558.75 00:17:42.856 =================================================================================================================== 00:17:42.856 Total : 8826.65 34.48 0.00 0.00 14446.49 6208.67 24558.75 00:17:42.856 00:17:42.856 Latency(us) 00:17:42.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.856 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:42.856 Nvme1n1 : 1.00 134882.09 526.88 0.00 0.00 944.88 360.02 1121.01 00:17:42.856 =================================================================================================================== 00:17:42.856 Total : 134882.09 526.88 0.00 0.00 944.88 360.02 1121.01 00:17:42.856 00:17:42.856 Latency(us) 00:17:42.856 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.856 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:42.856 Nvme1n1 : 1.00 11210.68 43.79 0.00 0.00 11388.60 3466.51 30353.52 00:17:42.856 =================================================================================================================== 00:17:42.856 Total : 11210.68 43.79 0.00 0.00 11388.60 3466.51 30353.52 00:17:43.423 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1979413 00:17:43.423 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1979414 00:17:43.423 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1979416 00:17:43.423 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:43.423 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:43.423 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:43.423 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:43.423 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:43.423 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:43.423 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:43.423 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:43.423 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:43.423 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:43.423 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:43.423 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:43.423 rmmod nvme_tcp 00:17:43.423 rmmod nvme_fabrics 00:17:43.681 rmmod nvme_keyring 00:17:43.681 00:33:09 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:43.681 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:43.681 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:43.681 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1979090 ']' 00:17:43.681 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1979090 00:17:43.681 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@947 -- # '[' -z 1979090 ']' 00:17:43.681 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@951 -- # kill -0 1979090 00:17:43.681 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # uname 00:17:43.681 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:17:43.681 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1979090 00:17:43.681 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:17:43.681 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:17:43.681 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1979090' 00:17:43.681 killing process with pid 1979090 00:17:43.681 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # kill 1979090 00:17:43.681 [2024-05-15 00:33:09.666211] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:17:43.681 00:33:09 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # wait 1979090 00:17:44.251 00:33:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:44.251 00:33:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:44.251 00:33:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:44.251 00:33:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:44.251 00:33:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:44.251 00:33:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.251 00:33:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.251 00:33:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.158 00:33:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:46.158 00:17:46.158 real 0m12.110s 00:17:46.158 user 0m24.781s 00:17:46.158 sys 0m6.184s 00:17:46.158 00:33:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # xtrace_disable 00:17:46.158 00:33:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:46.158 ************************************ 00:17:46.158 END TEST nvmf_bdev_io_wait 00:17:46.158 ************************************ 00:17:46.158 00:33:12 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:46.158 00:33:12 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:17:46.158 00:33:12 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:17:46.158 00:33:12 nvmf_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:17:46.158 ************************************ 00:17:46.158 START TEST nvmf_queue_depth 00:17:46.158 ************************************ 00:17:46.158 00:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:46.419 * Looking for test storage... 00:17:46.419 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:46.419 00:33:12 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:46.419 00:33:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:17:51.765 Found 0000:27:00.0 (0x8086 - 0x159b) 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:17:51.765 Found 0000:27:00.1 (0x8086 - 0x159b) 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:17:51.765 Found net devices under 0000:27:00.0: cvl_0_0 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:51.765 00:33:17 
nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:17:51.765 Found net devices under 0000:27:00.1: cvl_0_1 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:51.765 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:51.765 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:17:51.765 00:17:51.765 --- 10.0.0.2 ping statistics --- 00:17:51.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.765 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:51.765 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:51.765 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:17:51.765 00:17:51.765 --- 10.0.0.1 ping statistics --- 00:17:51.765 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.765 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:51.765 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:51.766 00:33:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@721 -- # xtrace_disable 00:17:51.766 00:33:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:51.766 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1983917 00:17:51.766 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:51.766 00:33:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1983917 00:17:51.766 00:33:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # '[' -z 1983917 ']' 00:17:51.766 00:33:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.766 00:33:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:51.766 00:33:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.766 00:33:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:51.766 00:33:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:51.766 [2024-05-15 00:33:17.507934] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
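Note: the network plumbing traced above (nvmf_tcp_init in nvmf/common.sh followed by nvmfappstart) reduces to the condensed shell sketch below. Interface names, namespace name, addresses and the nvmf_tgt arguments are taken verbatim from this log; only $SPDK_DIR is an assumed placeholder for the checkout path, and this is a sketch of what the harness does, not the harness code itself.

  # Put the target-side port into its own namespace and give each side an address.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port, sanity-check reachability both ways, load the initiator driver.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  # nvmfappstart then launches the target inside that namespace (core mask 0x2 in this run):
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &

The harness then waits for the target's RPC socket (/var/tmp/spdk.sock) to come up before configuring it, which is the "Waiting for process to start up and listen..." message in the log.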
00:17:51.766 [2024-05-15 00:33:17.508007] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.766 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.766 [2024-05-15 00:33:17.626764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.766 [2024-05-15 00:33:17.773416] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.766 [2024-05-15 00:33:17.773472] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.766 [2024-05-15 00:33:17.773488] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.766 [2024-05-15 00:33:17.773504] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.766 [2024-05-15 00:33:17.773516] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:51.766 [2024-05-15 00:33:17.773574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.332 00:33:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:52.332 00:33:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@861 -- # return 0 00:17:52.332 00:33:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:52.332 00:33:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@727 -- # xtrace_disable 00:17:52.332 00:33:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:52.333 [2024-05-15 00:33:18.266816] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:52.333 Malloc0 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.333 00:33:18 
nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:52.333 [2024-05-15 00:33:18.350544] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:17:52.333 [2024-05-15 00:33:18.350867] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1984165 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1984165 /var/tmp/bdevperf.sock 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@828 -- # '[' -z 1984165 ']' 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local max_retries=100 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:52.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@837 -- # xtrace_disable 00:17:52.333 00:33:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:52.333 [2024-05-15 00:33:18.398246] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
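The rpc_cmd calls above are thin wrappers around scripts/rpc.py talking to the target's RPC socket (/var/tmp/spdk.sock); the bdevperf process started next gets its own socket at /var/tmp/bdevperf.sock. A minimal stand-alone equivalent of the target-side setup, with sizes, NQN and address exactly as in this run ($SPDK_DIR again being an assumed placeholder, not something the harness defines), would be:

  RPC="$SPDK_DIR/scripts/rpc.py"
  # -t tcp -o -u 8192 mirrors NVMF_TRANSPORT_OPTS from the log (-o is the TCP c2h-success toggle).
  "$RPC" nvmf_create_transport -t tcp -o -u 8192
  # 64 MB RAM-backed bdev with 512-byte blocks to serve as the namespace.
  "$RPC" bdev_malloc_create 64 512 -b Malloc0
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf is then attached to the same subsystem over TCP via bdev_nvme_attach_controller on its own socket, and bdevperf.py perform_tests drives the 1024-deep, 4 KiB verify workload for 10 seconds, as the following log records show.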
00:17:52.333 [2024-05-15 00:33:18.398310] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1984165 ] 00:17:52.333 EAL: No free 2048 kB hugepages reported on node 1 00:17:52.333 [2024-05-15 00:33:18.480443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.593 [2024-05-15 00:33:18.571554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.164 00:33:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:17:53.164 00:33:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@861 -- # return 0 00:17:53.164 00:33:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:53.164 00:33:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@560 -- # xtrace_disable 00:17:53.164 00:33:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:53.422 NVMe0n1 00:17:53.422 00:33:19 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:17:53.422 00:33:19 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:53.422 Running I/O for 10 seconds... 00:18:03.408 00:18:03.408 Latency(us) 00:18:03.408 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.408 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:03.408 Verification LBA range: start 0x0 length 0x4000 00:18:03.408 NVMe0n1 : 10.07 12280.42 47.97 0.00 0.00 83114.64 19453.84 78367.26 00:18:03.408 =================================================================================================================== 00:18:03.408 Total : 12280.42 47.97 0.00 0.00 83114.64 19453.84 78367.26 00:18:03.408 0 00:18:03.408 00:33:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1984165 00:18:03.408 00:33:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' -z 1984165 ']' 00:18:03.408 00:33:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # kill -0 1984165 00:18:03.408 00:33:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # uname 00:18:03.408 00:33:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:03.408 00:33:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1984165 00:18:03.408 00:33:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:18:03.408 00:33:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:18:03.408 00:33:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1984165' 00:18:03.408 killing process with pid 1984165 00:18:03.408 00:33:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # kill 1984165 00:18:03.408 Received shutdown signal, test time was about 10.000000 seconds 00:18:03.408 00:18:03.408 Latency(us) 00:18:03.408 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:03.408 =================================================================================================================== 00:18:03.408 Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:03.408 00:33:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@971 -- # wait 1984165 00:18:03.979 00:33:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:03.979 00:33:29 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:03.979 00:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:03.979 00:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:03.979 00:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:03.979 00:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:03.979 00:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:03.979 00:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:03.979 rmmod nvme_tcp 00:18:03.979 rmmod nvme_fabrics 00:18:03.979 rmmod nvme_keyring 00:18:03.979 00:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:03.979 00:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:03.979 00:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:03.979 00:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1983917 ']' 00:18:03.979 00:33:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1983917 00:18:03.979 00:33:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@947 -- # '[' -z 1983917 ']' 00:18:03.979 00:33:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@951 -- # kill -0 1983917 00:18:03.979 00:33:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # uname 00:18:03.979 00:33:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:03.979 00:33:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1983917 00:18:03.979 00:33:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:18:03.979 00:33:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:18:03.979 00:33:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1983917' 00:18:03.979 killing process with pid 1983917 00:18:03.979 00:33:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # kill 1983917 00:18:03.979 [2024-05-15 00:33:30.016766] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:03.979 00:33:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@971 -- # wait 1983917 00:18:04.549 00:33:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:04.549 00:33:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:04.549 00:33:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:04.549 00:33:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:04.549 00:33:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:04.549 00:33:30 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.549 00:33:30 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:04.549 00:33:30 nvmf_tcp.nvmf_queue_depth -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.455 00:33:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:06.455 00:18:06.455 real 0m20.343s 00:18:06.455 user 0m25.374s 00:18:06.455 sys 0m5.150s 00:18:06.455 00:33:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # xtrace_disable 00:18:06.455 00:33:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:06.455 ************************************ 00:18:06.455 END TEST nvmf_queue_depth 00:18:06.455 ************************************ 00:18:06.714 00:33:32 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:06.714 00:33:32 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:18:06.714 00:33:32 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:18:06.714 00:33:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:06.714 ************************************ 00:18:06.714 START TEST nvmf_target_multipath 00:18:06.714 ************************************ 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:06.715 * Looking for test storage... 00:18:06.715 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:06.715 00:33:32 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:11.994 00:33:37 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:18:11.994 Found 0000:27:00.0 (0x8086 - 0x159b) 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:18:11.994 Found 0000:27:00.1 (0x8086 - 0x159b) 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.994 00:33:37 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:18:11.994 Found net devices under 0000:27:00.0: cvl_0_0 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.994 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:11.995 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.995 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:11.995 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:11.995 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:11.995 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:11.995 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.995 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:18:11.995 Found net devices under 0000:27:00.1: cvl_0_1 00:18:11.995 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.995 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:11.995 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:11.995 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:11.995 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:11.995 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:11.995 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:11.995 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:11.995 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:11.995 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:11.995 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:11.995 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:11.995 00:33:37 
nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:11.995 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:11.995 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:11.995 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:11.995 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:11.995 00:33:37 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:11.995 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:11.995 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:11.995 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:11.995 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:11.995 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:12.255 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:12.255 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:12.255 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:12.256 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:12.256 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:18:12.256 00:18:12.256 --- 10.0.0.2 ping statistics --- 00:18:12.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.256 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:12.256 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:12.256 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:18:12.256 00:18:12.256 --- 10.0.0.1 ping statistics --- 00:18:12.256 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:12.256 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:12.256 only one NIC for nvmf test 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:12.256 rmmod nvme_tcp 00:18:12.256 rmmod nvme_fabrics 00:18:12.256 rmmod nvme_keyring 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:12.256 00:33:38 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.801 00:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:18:14.801 00:33:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:14.801 00:33:40 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:14.801 00:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:14.801 00:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:14.801 00:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:14.801 00:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:14.801 00:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:14.801 00:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:14.801 00:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:14.801 00:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:14.801 00:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:14.801 00:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:14.801 00:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:14.801 00:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:14.801 00:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:14.801 00:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:14.801 00:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:14.801 00:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.801 00:33:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.801 00:33:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.801 00:33:40 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:14.801 00:18:14.801 real 0m7.718s 00:18:14.801 user 0m1.495s 00:18:14.801 sys 0m4.117s 00:18:14.801 00:33:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # xtrace_disable 00:18:14.801 00:33:40 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:14.801 ************************************ 00:18:14.801 END TEST nvmf_target_multipath 00:18:14.801 ************************************ 00:18:14.801 00:33:40 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:14.801 00:33:40 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:18:14.801 00:33:40 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:18:14.801 00:33:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:14.801 ************************************ 00:18:14.801 START TEST nvmf_zcopy 00:18:14.801 ************************************ 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:14.801 * Looking for test storage... 
00:18:14.801 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
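The very long PATH values printed by paths/export.sh above come from each export.sh line prepending its tool directory again on every re-source, with no de-duplication, so the same /opt/go, /opt/golangci and /opt/protoc entries pile up once per test suite that sources nvmf/common.sh. A rough reconstruction of the pattern (the real generated /etc/opt/spdk-pkgdep/paths/export.sh may differ in detail; directory versions are taken from this log):

  # Hypothetical sketch of the prepend-without-dedup pattern behind the repeated PATH entries.
  PATH=/opt/golangci/1.54.2/bin:$PATH   # export.sh@2
  PATH=/opt/go/1.21.1/bin:$PATH         # export.sh@3
  PATH=/opt/protoc/21.7/bin:$PATH       # export.sh@4
  export PATH                           # export.sh@5
  echo "$PATH"                          # export.sh@6 - grows by three entries per sourcing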
00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:14.801 00:33:40 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:21.376 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:21.376 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:21.376 00:33:47 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:21.376 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:21.376 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:21.376 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:21.376 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:21.376 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:21.376 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:21.376 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:21.376 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:21.376 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:21.376 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:21.376 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:21.376 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:21.376 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:18:21.377 Found 0000:27:00.0 (0x8086 - 0x159b) 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:21.377 00:33:47 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:18:21.377 Found 0000:27:00.1 (0x8086 - 0x159b) 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:18:21.377 Found net devices under 0000:27:00.0: cvl_0_0 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:18:21.377 Found net devices under 0000:27:00.1: cvl_0_1 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:21.377 00:33:47 
nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:21.377 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:21.377 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:18:21.377 00:18:21.377 --- 10.0.0.2 ping statistics --- 00:18:21.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:21.377 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:21.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:21.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:18:21.377 00:18:21.377 --- 10.0.0.1 ping statistics --- 00:18:21.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:21.377 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1994494 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1994494 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@828 -- # '[' -z 1994494 ']' 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:21.377 00:33:47 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:21.377 [2024-05-15 00:33:47.403916] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:18:21.377 [2024-05-15 00:33:47.404042] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:21.377 EAL: No free 2048 kB hugepages reported on node 1 00:18:21.637 [2024-05-15 00:33:47.564944] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.637 [2024-05-15 00:33:47.723426] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:21.637 [2024-05-15 00:33:47.723490] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
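For reference, the nvmf_tcp_init sequence traced above, condensed into the underlying commands. The two ice-driven ports found at 0000:27:00.0 and 0000:27:00.1 expose the net devices cvl_0_0 and cvl_0_1; cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target-side interface (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and each direction is verified with a single ping. This is only a readability sketch of commands already present in the trace, not additional steps:

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target, 0.254 ms
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator, 0.173 ms

The nvmf_tgt application is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2), which is why the startup notices around this point report a single available core and a reactor on core 1.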
00:18:21.637 [2024-05-15 00:33:47.723507] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:21.637 [2024-05-15 00:33:47.723526] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:21.637 [2024-05-15 00:33:47.723539] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:21.638 [2024-05-15 00:33:47.723592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@861 -- # return 0 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:22.208 [2024-05-15 00:33:48.178470] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:22.208 [2024-05-15 00:33:48.198372] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:22.208 [2024-05-15 00:33:48.198900] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:22.208 malloc0 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:22.208 { 00:18:22.208 "params": { 00:18:22.208 "name": "Nvme$subsystem", 00:18:22.208 "trtype": "$TEST_TRANSPORT", 00:18:22.208 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:22.208 "adrfam": "ipv4", 00:18:22.208 "trsvcid": "$NVMF_PORT", 00:18:22.208 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:22.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:22.208 "hdgst": ${hdgst:-false}, 00:18:22.208 "ddgst": ${ddgst:-false} 00:18:22.208 }, 00:18:22.208 "method": "bdev_nvme_attach_controller" 00:18:22.208 } 00:18:22.208 EOF 00:18:22.208 )") 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:22.208 00:33:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:22.208 "params": { 00:18:22.208 "name": "Nvme1", 00:18:22.208 "trtype": "tcp", 00:18:22.208 "traddr": "10.0.0.2", 00:18:22.208 "adrfam": "ipv4", 00:18:22.208 "trsvcid": "4420", 00:18:22.208 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:22.208 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:22.208 "hdgst": false, 00:18:22.208 "ddgst": false 00:18:22.208 }, 00:18:22.208 "method": "bdev_nvme_attach_controller" 00:18:22.208 }' 00:18:22.208 [2024-05-15 00:33:48.360387] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:18:22.208 [2024-05-15 00:33:48.360534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1994688 ] 00:18:22.468 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.468 [2024-05-15 00:33:48.475136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.468 [2024-05-15 00:33:48.580372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.037 Running I/O for 10 seconds... 
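Condensed from the target/zcopy.sh steps above: the target is provisioned over RPC (zero-copy enabled on the TCP transport, one subsystem with a 32 MB malloc namespace and listeners on 10.0.0.2:4420), then bdevperf attaches as the initiator through a JSON config generated on the fly and runs the first workload. In the SPDK test harness rpc_cmd normally forwards to scripts/rpc.py against /var/tmp/spdk.sock; that wrapper is an assumption here, since the trace only shows the rpc_cmd calls themselves:

  # target side, inside the cvl_0_0_ns_spdk namespace
  rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # initiator side: 10 s verify workload, queue depth 128, 8 KiB I/O,
  # bdev configuration supplied on fd 62 by gen_nvmf_target_json
  bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192

The results table that follows reports about 8.8k IOPS (68.74 MiB/s) at roughly 14.5 ms average latency for that verify run.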
00:18:33.011 00:18:33.011 Latency(us) 00:18:33.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:33.011 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:33.011 Verification LBA range: start 0x0 length 0x1000 00:18:33.011 Nvme1n1 : 10.01 8798.56 68.74 0.00 0.00 14509.14 2500.72 33250.90 00:18:33.011 =================================================================================================================== 00:18:33.011 Total : 8798.56 68.74 0.00 0.00 14509.14 2500.72 33250.90 00:18:33.268 00:33:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1996778 00:18:33.268 00:33:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:18:33.268 00:33:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:33.268 00:33:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:33.268 00:33:59 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:33.268 00:33:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:33.268 00:33:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:33.268 00:33:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:33.268 00:33:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:33.268 { 00:18:33.268 "params": { 00:18:33.268 "name": "Nvme$subsystem", 00:18:33.268 "trtype": "$TEST_TRANSPORT", 00:18:33.268 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:33.268 "adrfam": "ipv4", 00:18:33.268 "trsvcid": "$NVMF_PORT", 00:18:33.268 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:33.268 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:33.268 "hdgst": ${hdgst:-false}, 00:18:33.268 "ddgst": ${ddgst:-false} 00:18:33.268 }, 00:18:33.268 "method": "bdev_nvme_attach_controller" 00:18:33.268 } 00:18:33.268 EOF 00:18:33.268 )") 00:18:33.268 00:33:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:33.268 [2024-05-15 00:33:59.358279] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.268 [2024-05-15 00:33:59.358325] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.268 00:33:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:18:33.268 00:33:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:33.268 00:33:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:33.268 "params": { 00:18:33.268 "name": "Nvme1", 00:18:33.268 "trtype": "tcp", 00:18:33.268 "traddr": "10.0.0.2", 00:18:33.268 "adrfam": "ipv4", 00:18:33.268 "trsvcid": "4420", 00:18:33.268 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:33.268 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:33.268 "hdgst": false, 00:18:33.268 "ddgst": false 00:18:33.268 }, 00:18:33.268 "method": "bdev_nvme_attach_controller" 00:18:33.268 }' 00:18:33.268 [2024-05-15 00:33:59.366178] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.268 [2024-05-15 00:33:59.366198] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.268 [2024-05-15 00:33:59.374173] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.268 [2024-05-15 00:33:59.374190] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.269 [2024-05-15 00:33:59.382163] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.269 [2024-05-15 00:33:59.382179] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.269 [2024-05-15 00:33:59.390164] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.269 [2024-05-15 00:33:59.390179] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.269 [2024-05-15 00:33:59.398168] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.269 [2024-05-15 00:33:59.398182] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.269 [2024-05-15 00:33:59.406174] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.269 [2024-05-15 00:33:59.406188] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.269 [2024-05-15 00:33:59.414166] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.269 [2024-05-15 00:33:59.414180] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.269 [2024-05-15 00:33:59.420293] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
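The heredoc, jq and printf steps above are gen_nvmf_target_json assembling the JSON that bdevperf reads on its --json file descriptor. Only the resolved bdev_nvme_attach_controller entry is printed in the trace; the enclosing subsystems/bdev wrapper sketched below is an assumption about the full shape of that config, included only to make it concrete (the test passes it via a process-substitution fd rather than a file):

  cat <<'EOF'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            },
            "method": "bdev_nvme_attach_controller"
          }
        ]
      }
    ]
  }
  EOF

This second bdevperf invocation (--json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192) drives a 5 s 50/50 random read/write workload. While it runs, the test keeps re-issuing nvmf_subsystem_add_ns for NSID 1, and every attempt is rejected with the paired 'Requested NSID 1 already in use' / 'Unable to add namespace' messages that fill the remainder of this excerpt, presumably to exercise namespace add and subsystem pause handling while I/O is in flight.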
00:18:33.269 [2024-05-15 00:33:59.420400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1996778 ] 00:18:33.269 [2024-05-15 00:33:59.422182] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.269 [2024-05-15 00:33:59.422196] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.269 [2024-05-15 00:33:59.430166] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.269 [2024-05-15 00:33:59.430180] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.438177] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.438192] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.446185] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.446200] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.454173] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.454187] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.462182] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.462195] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.470183] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.470196] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.478179] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.478192] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.486189] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.486202] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 EAL: No free 2048 kB hugepages reported on node 1 00:18:33.527 [2024-05-15 00:33:59.494185] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.494199] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.502193] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.502207] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.510197] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.510211] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.518187] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.518201] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.526210] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.526224] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.530200] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.527 [2024-05-15 00:33:59.534200] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.534213] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.542199] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.542212] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.550214] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.550227] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.558203] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.558216] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.566216] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.566233] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.574215] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.574228] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.582215] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.582228] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.590225] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.590238] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.598223] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.598239] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.606222] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.606236] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.614240] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.614255] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.622220] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.622235] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.626280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.527 [2024-05-15 00:33:59.630231] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.630245] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:18:33.527 [2024-05-15 00:33:59.638232] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.638247] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.646230] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.646245] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.654239] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.654255] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.662256] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.662269] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.670233] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.670247] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.678242] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.678255] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.527 [2024-05-15 00:33:59.686233] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.527 [2024-05-15 00:33:59.686246] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.785 [2024-05-15 00:33:59.694245] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.785 [2024-05-15 00:33:59.694260] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.785 [2024-05-15 00:33:59.702255] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.785 [2024-05-15 00:33:59.702272] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.785 [2024-05-15 00:33:59.710272] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.785 [2024-05-15 00:33:59.710288] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.785 [2024-05-15 00:33:59.718260] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.785 [2024-05-15 00:33:59.718275] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.785 [2024-05-15 00:33:59.726257] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.785 [2024-05-15 00:33:59.726271] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.785 [2024-05-15 00:33:59.734255] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.785 [2024-05-15 00:33:59.734268] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.785 [2024-05-15 00:33:59.742263] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.785 [2024-05-15 00:33:59.742278] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.785 [2024-05-15 00:33:59.750255] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:18:33.785 [2024-05-15 00:33:59.750269] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.785 [2024-05-15 00:33:59.758264] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.785 [2024-05-15 00:33:59.758279] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.785 [2024-05-15 00:33:59.766269] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.785 [2024-05-15 00:33:59.766283] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.785 [2024-05-15 00:33:59.774261] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.785 [2024-05-15 00:33:59.774275] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.785 [2024-05-15 00:33:59.782451] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.785 [2024-05-15 00:33:59.782468] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.785 [2024-05-15 00:33:59.790466] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.785 [2024-05-15 00:33:59.790489] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.785 [2024-05-15 00:33:59.798455] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.785 [2024-05-15 00:33:59.798477] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.785 [2024-05-15 00:33:59.806477] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.785 [2024-05-15 00:33:59.806499] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.785 [2024-05-15 00:33:59.814459] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.785 [2024-05-15 00:33:59.814479] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.786 [2024-05-15 00:33:59.822469] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.786 [2024-05-15 00:33:59.822485] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.786 [2024-05-15 00:33:59.830470] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.786 [2024-05-15 00:33:59.830485] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.786 [2024-05-15 00:33:59.838456] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.786 [2024-05-15 00:33:59.838472] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.786 [2024-05-15 00:33:59.846471] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.786 [2024-05-15 00:33:59.846487] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.786 [2024-05-15 00:33:59.854482] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.786 [2024-05-15 00:33:59.854502] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.786 [2024-05-15 00:33:59.862478] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.786 [2024-05-15 00:33:59.862501] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.786 [2024-05-15 00:33:59.870502] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.786 [2024-05-15 00:33:59.870523] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.786 [2024-05-15 00:33:59.878478] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.786 [2024-05-15 00:33:59.878494] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.786 [2024-05-15 00:33:59.886528] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.786 [2024-05-15 00:33:59.886563] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.786 [2024-05-15 00:33:59.894502] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.786 [2024-05-15 00:33:59.894520] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.786 Running I/O for 5 seconds... 00:18:33.786 [2024-05-15 00:33:59.902510] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.786 [2024-05-15 00:33:59.902531] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.786 [2024-05-15 00:33:59.913256] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.786 [2024-05-15 00:33:59.913286] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.786 [2024-05-15 00:33:59.922283] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.786 [2024-05-15 00:33:59.922311] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.786 [2024-05-15 00:33:59.931945] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.786 [2024-05-15 00:33:59.931973] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:33.786 [2024-05-15 00:33:59.940607] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:33.786 [2024-05-15 00:33:59.940634] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.046 [2024-05-15 00:33:59.950406] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.046 [2024-05-15 00:33:59.950435] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.046 [2024-05-15 00:33:59.959601] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.046 [2024-05-15 00:33:59.959629] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.046 [2024-05-15 00:33:59.968823] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.046 [2024-05-15 00:33:59.968850] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.046 [2024-05-15 00:33:59.977521] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.046 [2024-05-15 00:33:59.977548] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.046 [2024-05-15 00:33:59.986567] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.046 [2024-05-15 00:33:59.986594] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.046 [2024-05-15 00:33:59.996302] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.046 
[2024-05-15 00:33:59.996330] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.046 [2024-05-15 00:34:00.012015] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.046 [2024-05-15 00:34:00.012050] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.046 [2024-05-15 00:34:00.022538] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.046 [2024-05-15 00:34:00.022577] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.046 [2024-05-15 00:34:00.032046] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.046 [2024-05-15 00:34:00.032077] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.046 [2024-05-15 00:34:00.040355] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.046 [2024-05-15 00:34:00.040392] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.046 [2024-05-15 00:34:00.050111] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.046 [2024-05-15 00:34:00.050138] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.046 [2024-05-15 00:34:00.058968] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.046 [2024-05-15 00:34:00.058995] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.046 [2024-05-15 00:34:00.067639] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.046 [2024-05-15 00:34:00.067665] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.046 [2024-05-15 00:34:00.077990] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.046 [2024-05-15 00:34:00.078025] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.046 [2024-05-15 00:34:00.086583] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.046 [2024-05-15 00:34:00.086640] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.046 [2024-05-15 00:34:00.096437] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.046 [2024-05-15 00:34:00.096466] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.046 [2024-05-15 00:34:00.105107] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.046 [2024-05-15 00:34:00.105133] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.046 [2024-05-15 00:34:00.114951] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.046 [2024-05-15 00:34:00.114979] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.046 [2024-05-15 00:34:00.124194] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.046 [2024-05-15 00:34:00.124220] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.046 [2024-05-15 00:34:00.133150] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.046 [2024-05-15 00:34:00.133176] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.046 [2024-05-15 00:34:00.142476] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.046 [2024-05-15 00:34:00.142505] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.046 [2024-05-15 00:34:00.151545] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.046 [2024-05-15 00:34:00.151577] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.046 [2024-05-15 00:34:00.160648] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.046 [2024-05-15 00:34:00.160677] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.046 [2024-05-15 00:34:00.170366] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.046 [2024-05-15 00:34:00.170392] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.046 [2024-05-15 00:34:00.179683] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.046 [2024-05-15 00:34:00.179710] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.046 [2024-05-15 00:34:00.188964] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.046 [2024-05-15 00:34:00.188990] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.046 [2024-05-15 00:34:00.197962] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.046 [2024-05-15 00:34:00.197988] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.046 [2024-05-15 00:34:00.207292] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.046 [2024-05-15 00:34:00.207318] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.307 [2024-05-15 00:34:00.217023] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.307 [2024-05-15 00:34:00.217054] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.307 [2024-05-15 00:34:00.226858] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.307 [2024-05-15 00:34:00.226887] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.307 [2024-05-15 00:34:00.236160] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.307 [2024-05-15 00:34:00.236186] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.307 [2024-05-15 00:34:00.245357] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.307 [2024-05-15 00:34:00.245385] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.307 [2024-05-15 00:34:00.254381] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.307 [2024-05-15 00:34:00.254407] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.307 [2024-05-15 00:34:00.263658] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.307 [2024-05-15 00:34:00.263685] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.307 [2024-05-15 00:34:00.273428] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.307 [2024-05-15 00:34:00.273455] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.307 [2024-05-15 00:34:00.282081] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.307 [2024-05-15 00:34:00.282106] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.307 [2024-05-15 00:34:00.291244] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.307 [2024-05-15 00:34:00.291269] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.307 [2024-05-15 00:34:00.300408] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.307 [2024-05-15 00:34:00.300436] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.307 [2024-05-15 00:34:00.309759] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.307 [2024-05-15 00:34:00.309784] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.307 [2024-05-15 00:34:00.319007] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.307 [2024-05-15 00:34:00.319034] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.307 [2024-05-15 00:34:00.328745] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.307 [2024-05-15 00:34:00.328770] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.307 [2024-05-15 00:34:00.338181] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.307 [2024-05-15 00:34:00.338210] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.307 [2024-05-15 00:34:00.347384] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.307 [2024-05-15 00:34:00.347409] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.307 [2024-05-15 00:34:00.356998] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.307 [2024-05-15 00:34:00.357025] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.307 [2024-05-15 00:34:00.366335] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.307 [2024-05-15 00:34:00.366361] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.307 [2024-05-15 00:34:00.374649] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.307 [2024-05-15 00:34:00.374677] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.307 [2024-05-15 00:34:00.383732] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.307 [2024-05-15 00:34:00.383758] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.307 [2024-05-15 00:34:00.392915] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.307 [2024-05-15 00:34:00.392945] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.307 [2024-05-15 00:34:00.402606] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.307 [2024-05-15 00:34:00.402632] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.307 [2024-05-15 00:34:00.411697] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.307 [2024-05-15 00:34:00.411722] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.307 [2024-05-15 00:34:00.420933] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.307 [2024-05-15 00:34:00.420962] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.307 [2024-05-15 00:34:00.429978] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.307 [2024-05-15 00:34:00.430006] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.308 [2024-05-15 00:34:00.439542] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.308 [2024-05-15 00:34:00.439573] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.308 [2024-05-15 00:34:00.448810] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.308 [2024-05-15 00:34:00.448836] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.308 [2024-05-15 00:34:00.458069] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.308 [2024-05-15 00:34:00.458094] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.308 [2024-05-15 00:34:00.467405] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.308 [2024-05-15 00:34:00.467431] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.569 [2024-05-15 00:34:00.476594] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.569 [2024-05-15 00:34:00.476623] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.569 [2024-05-15 00:34:00.485853] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.569 [2024-05-15 00:34:00.485879] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.569 [2024-05-15 00:34:00.494962] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.569 [2024-05-15 00:34:00.494989] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.569 [2024-05-15 00:34:00.504632] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.569 [2024-05-15 00:34:00.504658] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.569 [2024-05-15 00:34:00.514549] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.569 [2024-05-15 00:34:00.514581] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.569 [2024-05-15 00:34:00.523749] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.569 [2024-05-15 00:34:00.523774] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.569 [2024-05-15 00:34:00.533429] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.569 [2024-05-15 00:34:00.533457] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.569 [2024-05-15 00:34:00.543281] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.569 [2024-05-15 00:34:00.543306] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.569 [2024-05-15 00:34:00.552436] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.569 [2024-05-15 00:34:00.552461] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.569 [2024-05-15 00:34:00.561599] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.569 [2024-05-15 00:34:00.561624] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.569 [2024-05-15 00:34:00.570795] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.569 [2024-05-15 00:34:00.570821] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.569 [2024-05-15 00:34:00.580121] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.569 [2024-05-15 00:34:00.580148] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.569 [2024-05-15 00:34:00.589940] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.569 [2024-05-15 00:34:00.589968] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.569 [2024-05-15 00:34:00.599110] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.569 [2024-05-15 00:34:00.599136] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.569 [2024-05-15 00:34:00.608324] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.569 [2024-05-15 00:34:00.608351] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.569 [2024-05-15 00:34:00.618315] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.569 [2024-05-15 00:34:00.618342] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.569 [2024-05-15 00:34:00.627532] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.569 [2024-05-15 00:34:00.627569] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.569 [2024-05-15 00:34:00.636908] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.569 [2024-05-15 00:34:00.636933] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.569 [2024-05-15 00:34:00.646219] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.569 [2024-05-15 00:34:00.646246] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.569 [2024-05-15 00:34:00.655837] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.569 [2024-05-15 00:34:00.655863] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.569 [2024-05-15 00:34:00.665196] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.569 [2024-05-15 00:34:00.665223] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.569 [2024-05-15 00:34:00.674892] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.569 [2024-05-15 00:34:00.674918] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.569 [2024-05-15 00:34:00.683510] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.569 [2024-05-15 00:34:00.683540] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.569 [2024-05-15 00:34:00.693347] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.569 [2024-05-15 00:34:00.693374] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.569 [2024-05-15 00:34:00.701987] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.569 [2024-05-15 00:34:00.702013] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.569 [2024-05-15 00:34:00.710648] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.569 [2024-05-15 00:34:00.710673] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.569 [2024-05-15 00:34:00.719732] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.569 [2024-05-15 00:34:00.719758] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.569 [2024-05-15 00:34:00.728719] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.569 [2024-05-15 00:34:00.728744] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.828 [2024-05-15 00:34:00.737710] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.828 [2024-05-15 00:34:00.737739] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.828 [2024-05-15 00:34:00.747503] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.828 [2024-05-15 00:34:00.747529] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.828 [2024-05-15 00:34:00.756742] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.828 [2024-05-15 00:34:00.756767] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.828 [2024-05-15 00:34:00.765895] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.828 [2024-05-15 00:34:00.765919] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.828 [2024-05-15 00:34:00.774918] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.828 [2024-05-15 00:34:00.774944] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.828 [2024-05-15 00:34:00.784033] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.828 [2024-05-15 00:34:00.784059] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.828 [2024-05-15 00:34:00.793731] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.828 [2024-05-15 00:34:00.793759] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.828 [2024-05-15 00:34:00.802457] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.828 [2024-05-15 00:34:00.802482] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.828 [2024-05-15 00:34:00.811667] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.828 [2024-05-15 00:34:00.811695] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.828 [2024-05-15 00:34:00.821392] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.828 [2024-05-15 00:34:00.821418] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.828 [2024-05-15 00:34:00.830958] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.828 [2024-05-15 00:34:00.830986] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.828 [2024-05-15 00:34:00.839532] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.829 [2024-05-15 00:34:00.839564] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.829 [2024-05-15 00:34:00.848502] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.829 [2024-05-15 00:34:00.848529] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.829 [2024-05-15 00:34:00.857603] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.829 [2024-05-15 00:34:00.857629] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.829 [2024-05-15 00:34:00.867657] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.829 [2024-05-15 00:34:00.867685] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.829 [2024-05-15 00:34:00.877320] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.829 [2024-05-15 00:34:00.877346] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.829 [2024-05-15 00:34:00.886759] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.829 [2024-05-15 00:34:00.886787] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.829 [2024-05-15 00:34:00.895967] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.829 [2024-05-15 00:34:00.895994] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.829 [2024-05-15 00:34:00.905605] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.829 [2024-05-15 00:34:00.905632] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.829 [2024-05-15 00:34:00.914169] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.829 [2024-05-15 00:34:00.914195] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.829 [2024-05-15 00:34:00.923085] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.829 [2024-05-15 00:34:00.923111] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.829 [2024-05-15 00:34:00.931973] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.829 [2024-05-15 00:34:00.932000] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.829 [2024-05-15 00:34:00.941032] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.829 [2024-05-15 00:34:00.941058] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.829 [2024-05-15 00:34:00.950042] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.829 [2024-05-15 00:34:00.950067] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.829 [2024-05-15 00:34:00.959457] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.829 [2024-05-15 00:34:00.959485] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.829 [2024-05-15 00:34:00.969288] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.829 [2024-05-15 00:34:00.969315] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.829 [2024-05-15 00:34:00.978843] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.829 [2024-05-15 00:34:00.978869] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:34.829 [2024-05-15 00:34:00.988128] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:34.829 [2024-05-15 00:34:00.988153] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.088 [2024-05-15 00:34:00.997459] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.088 [2024-05-15 00:34:00.997487] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.088 [2024-05-15 00:34:01.007266] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.088 [2024-05-15 00:34:01.007293] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.088 [2024-05-15 00:34:01.017102] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.088 [2024-05-15 00:34:01.017129] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.088 [2024-05-15 00:34:01.026244] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.088 [2024-05-15 00:34:01.026270] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.088 [2024-05-15 00:34:01.035292] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.088 [2024-05-15 00:34:01.035319] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.088 [2024-05-15 00:34:01.045050] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.088 [2024-05-15 00:34:01.045076] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.088 [2024-05-15 00:34:01.054793] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.088 [2024-05-15 00:34:01.054818] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.088 [2024-05-15 00:34:01.064105] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.088 [2024-05-15 00:34:01.064130] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.088 [2024-05-15 00:34:01.073319] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.088 [2024-05-15 00:34:01.073343] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.088 [2024-05-15 00:34:01.082352] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.088 [2024-05-15 00:34:01.082377] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.088 [2024-05-15 00:34:01.091849] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.088 [2024-05-15 00:34:01.091880] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.088 [2024-05-15 00:34:01.101093] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.088 [2024-05-15 00:34:01.101119] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.088 [2024-05-15 00:34:01.110705] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.088 [2024-05-15 00:34:01.110730] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.088 [2024-05-15 00:34:01.119280] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.088 [2024-05-15 00:34:01.119305] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.088 [2024-05-15 00:34:01.128814] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.088 [2024-05-15 00:34:01.128840] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.088 [2024-05-15 00:34:01.138220] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.088 [2024-05-15 00:34:01.138246] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.088 [2024-05-15 00:34:01.147408] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.088 [2024-05-15 00:34:01.147435] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.088 [2024-05-15 00:34:01.156758] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.088 [2024-05-15 00:34:01.156785] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.088 [2024-05-15 00:34:01.165985] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.088 [2024-05-15 00:34:01.166011] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.088 [2024-05-15 00:34:01.175021] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.088 [2024-05-15 00:34:01.175048] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.088 [2024-05-15 00:34:01.184043] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.088 [2024-05-15 00:34:01.184070] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.088 [2024-05-15 00:34:01.193716] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.088 [2024-05-15 00:34:01.193743] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.088 [2024-05-15 00:34:01.203221] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.088 [2024-05-15 00:34:01.203247] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.088 [2024-05-15 00:34:01.212618] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.088 [2024-05-15 00:34:01.212644] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.088 [2024-05-15 00:34:01.221776] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.088 [2024-05-15 00:34:01.221802] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.088 [2024-05-15 00:34:01.231087] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.088 [2024-05-15 00:34:01.231114] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.088 [2024-05-15 00:34:01.239056] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.088 [2024-05-15 00:34:01.239084] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.088 [2024-05-15 00:34:01.249579] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.088 [2024-05-15 00:34:01.249605] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.346 [2024-05-15 00:34:01.258240] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.346 [2024-05-15 00:34:01.258266] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.346 [2024-05-15 00:34:01.268139] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.346 [2024-05-15 00:34:01.268173] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.346 [2024-05-15 00:34:01.277821] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.346 [2024-05-15 00:34:01.277854] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.346 [2024-05-15 00:34:01.286942] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.346 [2024-05-15 00:34:01.286970] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.346 [2024-05-15 00:34:01.296168] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.346 [2024-05-15 00:34:01.296195] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.346 [2024-05-15 00:34:01.305345] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.346 [2024-05-15 00:34:01.305371] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.346 [2024-05-15 00:34:01.314920] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.346 [2024-05-15 00:34:01.314946] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.346 [2024-05-15 00:34:01.323411] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.346 [2024-05-15 00:34:01.323436] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.346 [2024-05-15 00:34:01.332540] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.346 [2024-05-15 00:34:01.332572] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.346 [2024-05-15 00:34:01.341679] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.346 [2024-05-15 00:34:01.341704] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.346 [2024-05-15 00:34:01.350821] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.346 [2024-05-15 00:34:01.350846] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.346 [2024-05-15 00:34:01.360096] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.346 [2024-05-15 00:34:01.360124] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.346 [2024-05-15 00:34:01.369295] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.346 [2024-05-15 00:34:01.369320] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.346 [2024-05-15 00:34:01.378837] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.346 [2024-05-15 00:34:01.378866] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.346 [2024-05-15 00:34:01.388199] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.346 [2024-05-15 00:34:01.388230] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.346 [2024-05-15 00:34:01.395734] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.346 [2024-05-15 00:34:01.395759] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.346 [2024-05-15 00:34:01.406567] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.346 [2024-05-15 00:34:01.406594] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.346 [2024-05-15 00:34:01.415766] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.346 [2024-05-15 00:34:01.415792] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.346 [2024-05-15 00:34:01.425044] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.346 [2024-05-15 00:34:01.425071] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.346 [2024-05-15 00:34:01.434057] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.346 [2024-05-15 00:34:01.434082] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.346 [2024-05-15 00:34:01.443110] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.346 [2024-05-15 00:34:01.443141] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.346 [2024-05-15 00:34:01.452671] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.346 [2024-05-15 00:34:01.452697] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.346 [2024-05-15 00:34:01.462258] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.346 [2024-05-15 00:34:01.462285] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.346 [2024-05-15 00:34:01.471437] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.346 [2024-05-15 00:34:01.471464] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.346 [2024-05-15 00:34:01.481301] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.346 [2024-05-15 00:34:01.481327] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.346 [2024-05-15 00:34:01.490923] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.346 [2024-05-15 00:34:01.490951] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.346 [2024-05-15 00:34:01.500348] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.346 [2024-05-15 00:34:01.500374] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.346 [2024-05-15 00:34:01.509621] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.346 [2024-05-15 00:34:01.509649] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.606 [2024-05-15 00:34:01.519082] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.606 [2024-05-15 00:34:01.519108] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.606 [2024-05-15 00:34:01.528698] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.606 [2024-05-15 00:34:01.528725] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.606 [2024-05-15 00:34:01.538533] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.606 [2024-05-15 00:34:01.538573] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.606 [2024-05-15 00:34:01.548133] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.606 [2024-05-15 00:34:01.548159] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.606 [2024-05-15 00:34:01.557287] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.606 [2024-05-15 00:34:01.557311] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.606 [2024-05-15 00:34:01.566817] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.606 [2024-05-15 00:34:01.566843] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.606 [2024-05-15 00:34:01.576523] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.606 [2024-05-15 00:34:01.576548] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.606 [2024-05-15 00:34:01.585695] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.606 [2024-05-15 00:34:01.585721] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.606 [2024-05-15 00:34:01.594834] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.606 [2024-05-15 00:34:01.594860] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.606 [2024-05-15 00:34:01.604192] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.606 [2024-05-15 00:34:01.604216] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.606 [2024-05-15 00:34:01.613727] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.606 [2024-05-15 00:34:01.613753] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.606 [2024-05-15 00:34:01.622923] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.606 [2024-05-15 00:34:01.622953] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.606 [2024-05-15 00:34:01.631510] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.606 [2024-05-15 00:34:01.631537] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.606 [2024-05-15 00:34:01.640658] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.606 [2024-05-15 00:34:01.640683] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.606 [2024-05-15 00:34:01.649957] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.606 [2024-05-15 00:34:01.649983] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.606 [2024-05-15 00:34:01.659222] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.606 [2024-05-15 00:34:01.659248] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.606 [2024-05-15 00:34:01.668824] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.606 [2024-05-15 00:34:01.668850] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.606 [2024-05-15 00:34:01.678058] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.606 [2024-05-15 00:34:01.678084] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.606 [2024-05-15 00:34:01.687718] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.606 [2024-05-15 00:34:01.687746] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.606 [2024-05-15 00:34:01.696862] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.606 [2024-05-15 00:34:01.696888] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.606 [2024-05-15 00:34:01.706431] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.606 [2024-05-15 00:34:01.706457] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.606 [2024-05-15 00:34:01.715610] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.606 [2024-05-15 00:34:01.715635] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.606 [2024-05-15 00:34:01.724672] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.606 [2024-05-15 00:34:01.724698] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.606 [2024-05-15 00:34:01.734237] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.606 [2024-05-15 00:34:01.734262] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.606 [2024-05-15 00:34:01.743563] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.606 [2024-05-15 00:34:01.743590] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.606 [2024-05-15 00:34:01.753210] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.606 [2024-05-15 00:34:01.753236] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.606 [2024-05-15 00:34:01.762268] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.606 [2024-05-15 00:34:01.762294] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.867 [2024-05-15 00:34:01.770843] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.867 [2024-05-15 00:34:01.770868] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.867 [2024-05-15 00:34:01.779825] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.867 [2024-05-15 00:34:01.779853] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.867 [2024-05-15 00:34:01.789448] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.867 [2024-05-15 00:34:01.789474] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.867 [2024-05-15 00:34:01.799072] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.867 [2024-05-15 00:34:01.799103] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.867 [2024-05-15 00:34:01.807810] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.867 [2024-05-15 00:34:01.807835] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.867 [2024-05-15 00:34:01.817238] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.867 [2024-05-15 00:34:01.817265] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.867 [2024-05-15 00:34:01.826868] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.867 [2024-05-15 00:34:01.826895] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.867 [2024-05-15 00:34:01.836105] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.867 [2024-05-15 00:34:01.836132] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.867 [2024-05-15 00:34:01.844950] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.867 [2024-05-15 00:34:01.844977] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.867 [2024-05-15 00:34:01.854089] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.867 [2024-05-15 00:34:01.854117] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.867 [2024-05-15 00:34:01.863367] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.867 [2024-05-15 00:34:01.863393] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.867 [2024-05-15 00:34:01.872982] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.867 [2024-05-15 00:34:01.873014] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.867 [2024-05-15 00:34:01.882228] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.867 [2024-05-15 00:34:01.882253] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.867 [2024-05-15 00:34:01.891843] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.867 [2024-05-15 00:34:01.891870] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.867 [2024-05-15 00:34:01.900278] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.867 [2024-05-15 00:34:01.900304] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.867 [2024-05-15 00:34:01.909446] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.867 [2024-05-15 00:34:01.909471] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.867 [2024-05-15 00:34:01.919158] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.867 [2024-05-15 00:34:01.919183] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.867 [2024-05-15 00:34:01.928321] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.868 [2024-05-15 00:34:01.928345] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.868 [2024-05-15 00:34:01.937798] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.868 [2024-05-15 00:34:01.937825] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.868 [2024-05-15 00:34:01.947434] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.868 [2024-05-15 00:34:01.947459] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.868 [2024-05-15 00:34:01.956654] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.868 [2024-05-15 00:34:01.956682] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.868 [2024-05-15 00:34:01.966262] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.868 [2024-05-15 00:34:01.966288] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.868 [2024-05-15 00:34:01.975362] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.868 [2024-05-15 00:34:01.975390] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.868 [2024-05-15 00:34:01.984636] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.868 [2024-05-15 00:34:01.984663] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.868 [2024-05-15 00:34:01.993803] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.868 [2024-05-15 00:34:01.993830] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.868 [2024-05-15 00:34:02.002870] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.868 [2024-05-15 00:34:02.002896] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.868 [2024-05-15 00:34:02.012134] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.868 [2024-05-15 00:34:02.012160] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.868 [2024-05-15 00:34:02.021675] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.868 [2024-05-15 00:34:02.021710] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:35.868 [2024-05-15 00:34:02.030198] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:35.868 [2024-05-15 00:34:02.030222] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.129 [2024-05-15 00:34:02.039235] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.129 [2024-05-15 00:34:02.039260] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.129 [2024-05-15 00:34:02.048830] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.129 [2024-05-15 00:34:02.048858] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.129 [2024-05-15 00:34:02.057439] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.129 [2024-05-15 00:34:02.057465] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.129 [2024-05-15 00:34:02.067143] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.129 [2024-05-15 00:34:02.067168] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.129 [2024-05-15 00:34:02.076273] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.129 [2024-05-15 00:34:02.076300] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.129 [2024-05-15 00:34:02.086057] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.129 [2024-05-15 00:34:02.086083] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.129 [2024-05-15 00:34:02.095114] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.129 [2024-05-15 00:34:02.095139] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.129 [2024-05-15 00:34:02.104307] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.129 [2024-05-15 00:34:02.104332] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.129 [2024-05-15 00:34:02.113390] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.129 [2024-05-15 00:34:02.113416] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.129 [2024-05-15 00:34:02.122876] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.129 [2024-05-15 00:34:02.122901] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.129 [2024-05-15 00:34:02.132578] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.129 [2024-05-15 00:34:02.132606] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.129 [2024-05-15 00:34:02.141302] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.129 [2024-05-15 00:34:02.141327] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.129 [2024-05-15 00:34:02.150833] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.129 [2024-05-15 00:34:02.150859] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.129 [2024-05-15 00:34:02.160074] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.129 [2024-05-15 00:34:02.160100] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.129 [2024-05-15 00:34:02.169519] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.129 [2024-05-15 00:34:02.169544] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.129 [2024-05-15 00:34:02.179019] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.129 [2024-05-15 00:34:02.179043] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.129 [2024-05-15 00:34:02.188757] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.129 [2024-05-15 00:34:02.188784] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.129 [2024-05-15 00:34:02.197893] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.129 [2024-05-15 00:34:02.197919] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.129 [2024-05-15 00:34:02.207529] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.129 [2024-05-15 00:34:02.207559] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.129 [2024-05-15 00:34:02.216739] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.129 [2024-05-15 00:34:02.216765] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.129 [2024-05-15 00:34:02.226185] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.129 [2024-05-15 00:34:02.226213] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.129 [2024-05-15 00:34:02.235833] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.129 [2024-05-15 00:34:02.235859] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.129 [2024-05-15 00:34:02.245203] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.129 [2024-05-15 00:34:02.245229] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.129 [2024-05-15 00:34:02.254368] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.129 [2024-05-15 00:34:02.254394] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.129 [2024-05-15 00:34:02.263404] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.129 [2024-05-15 00:34:02.263431] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.129 [2024-05-15 00:34:02.272829] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.129 [2024-05-15 00:34:02.272857] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.129 [2024-05-15 00:34:02.281954] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.129 [2024-05-15 00:34:02.281981] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.129 [2024-05-15 00:34:02.291480] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.129 [2024-05-15 00:34:02.291505] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.389 [2024-05-15 00:34:02.300658] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.389 [2024-05-15 00:34:02.300686] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.389 [2024-05-15 00:34:02.310398] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.389 [2024-05-15 00:34:02.310426] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.389 [2024-05-15 00:34:02.319493] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.389 [2024-05-15 00:34:02.319519] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.389 [2024-05-15 00:34:02.328506] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.389 [2024-05-15 00:34:02.328535] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.389 [2024-05-15 00:34:02.337510] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.389 [2024-05-15 00:34:02.337536] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.389 [2024-05-15 00:34:02.347125] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.389 [2024-05-15 00:34:02.347152] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.389 [2024-05-15 00:34:02.355584] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.389 [2024-05-15 00:34:02.355609] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.389 [2024-05-15 00:34:02.364614] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.389 [2024-05-15 00:34:02.364640] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.389 [2024-05-15 00:34:02.374281] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.389 [2024-05-15 00:34:02.374307] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.389 [2024-05-15 00:34:02.383484] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.389 [2024-05-15 00:34:02.383513] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.389 [2024-05-15 00:34:02.392497] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.389 [2024-05-15 00:34:02.392522] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.389 [2024-05-15 00:34:02.401827] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.389 [2024-05-15 00:34:02.401854] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.389 [2024-05-15 00:34:02.410985] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.389 [2024-05-15 00:34:02.411011] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.389 [2024-05-15 00:34:02.420079] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.389 [2024-05-15 00:34:02.420104] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.389 [2024-05-15 00:34:02.429827] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.389 [2024-05-15 00:34:02.429855] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.389 [2024-05-15 00:34:02.439039] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.389 [2024-05-15 00:34:02.439066] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.389 [2024-05-15 00:34:02.448240] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.389 [2024-05-15 00:34:02.448266] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.389 [2024-05-15 00:34:02.457828] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.389 [2024-05-15 00:34:02.457853] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.389 [2024-05-15 00:34:02.467106] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.389 [2024-05-15 00:34:02.467136] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.389 [2024-05-15 00:34:02.476645] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.389 [2024-05-15 00:34:02.476672] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.389 [2024-05-15 00:34:02.485926] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.389 [2024-05-15 00:34:02.485952] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.389 [2024-05-15 00:34:02.495008] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.389 [2024-05-15 00:34:02.495039] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.389 [2024-05-15 00:34:02.503981] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.389 [2024-05-15 00:34:02.504006] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.389 [2024-05-15 00:34:02.513617] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.389 [2024-05-15 00:34:02.513644] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.390 [2024-05-15 00:34:02.522747] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.390 [2024-05-15 00:34:02.522774] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.390 [2024-05-15 00:34:02.531349] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.390 [2024-05-15 00:34:02.531374] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.390 [2024-05-15 00:34:02.540429] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.390 [2024-05-15 00:34:02.540457] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.390 [2024-05-15 00:34:02.550068] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.390 [2024-05-15 00:34:02.550094] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.649 [2024-05-15 00:34:02.558722] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.649 [2024-05-15 00:34:02.558749] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.649 [2024-05-15 00:34:02.567744] 
subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.649 [2024-05-15 00:34:02.567770] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.649 [2024-05-15 00:34:02.577423] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.649 [2024-05-15 00:34:02.577453] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.649 [2024-05-15 00:34:02.586929] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.649 [2024-05-15 00:34:02.586957] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.649 [2024-05-15 00:34:02.596283] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.650 [2024-05-15 00:34:02.596308] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.650 [2024-05-15 00:34:02.605870] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.650 [2024-05-15 00:34:02.605898] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.650 [2024-05-15 00:34:02.614369] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.650 [2024-05-15 00:34:02.614394] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.650 [2024-05-15 00:34:02.623427] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.650 [2024-05-15 00:34:02.623455] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.650 [2024-05-15 00:34:02.633081] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.650 [2024-05-15 00:34:02.633107] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.650 [2024-05-15 00:34:02.642255] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.650 [2024-05-15 00:34:02.642283] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.650 [2024-05-15 00:34:02.651997] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.650 [2024-05-15 00:34:02.652025] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.650 [2024-05-15 00:34:02.661896] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.650 [2024-05-15 00:34:02.661925] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.650 [2024-05-15 00:34:02.671678] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.650 [2024-05-15 00:34:02.671708] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.650 [2024-05-15 00:34:02.680975] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.650 [2024-05-15 00:34:02.680999] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.650 [2024-05-15 00:34:02.689643] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.650 [2024-05-15 00:34:02.689670] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:36.650 [2024-05-15 00:34:02.699295] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:36.650 [2024-05-15 00:34:02.699320] 
nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:36.650 [2024-05-15 00:34:02.708399] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:36.650 [2024-05-15 00:34:02.708425] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two error lines are logged for every retried add, roughly every 9-10 ms, through [2024-05-15 00:34:04.903224] ...]
00:18:38.790 Latency(us)
00:18:38.790 Device Information : runtime(s)      IOPS     MiB/s    Fail/s    TO/s   Average       min       max
00:18:38.790 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:18:38.790 Nvme1n1            :       5.01  17172.34   134.16      0.00    0.00   7447.32   3311.29  15590.67
00:18:38.790 ===================================================================================================================
00:18:38.790 Total              :             17172.34   134.16      0.00    0.00   7447.32   3311.29  15590.67
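As a quick cross-check of the summary above: the MiB/s column follows directly from the IOPS column at the job's 8192-byte I/O size. A minimal shell sketch (not part of zcopy.sh) using only the numbers printed in the table:

# Sanity check: 17172.34 IOPS x 8192 bytes per I/O is ~134.16 MiB/s, matching the table.
# Standalone sketch; the values are copied from the summary above.
iops=17172.34
io_size=8192   # bytes, from "IO size: 8192"
awk -v iops="$iops" -v sz="$io_size" 'BEGIN { printf "%.2f MiB/s\n", iops * sz / 1048576 }'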
[... after the job summary the same pair of errors continues for the remaining add attempts, now about every 8 ms, starting at [2024-05-15 00:34:04.911180]; the tail of that run and the test teardown follow ...]
00:18:39.049 [2024-05-15 00:34:05.199248]
00:18:39.310 [2024-05-15 00:34:05.271274] subsystem.c:2029:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:39.310 [2024-05-15 00:34:05.271289] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:39.310 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1996778) - No such process
00:18:39.310 00:34:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1996778
00:18:39.310 00:34:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:18:39.310 00:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:39.310 00:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:39.310 00:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:39.310 00:34:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:18:39.310 00:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable
00:18:39.310 00:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:39.310 delay0
00:18:39.310 00:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:18:39.310 00:34:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:39.310 00:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:39.310 00:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:39.310 00:34:05 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:39.310 00:34:05 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:39.310 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.310 [2024-05-15 00:34:05.468872] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:45.881 Initializing NVMe Controllers 00:18:45.881 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:45.881 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:45.881 Initialization complete. Launching workers. 00:18:45.881 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1111 00:18:45.882 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1383, failed to submit 48 00:18:45.882 success 1189, unsuccess 194, failed 0 00:18:45.882 00:34:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:45.882 00:34:11 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:45.882 00:34:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:45.882 00:34:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:45.882 00:34:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:45.882 00:34:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:45.882 00:34:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:45.882 00:34:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:45.882 rmmod nvme_tcp 00:18:45.882 rmmod nvme_fabrics 00:18:45.882 rmmod nvme_keyring 00:18:45.882 00:34:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:45.882 00:34:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:45.882 00:34:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:45.882 00:34:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1994494 ']' 00:18:45.882 00:34:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1994494 00:18:45.882 00:34:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@947 -- # '[' -z 1994494 ']' 00:18:45.882 00:34:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@951 -- # kill -0 1994494 00:18:45.882 00:34:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # uname 00:18:45.882 00:34:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:18:45.882 00:34:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 1994494 00:18:45.882 00:34:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:18:45.882 00:34:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:18:45.882 00:34:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@965 -- # echo 'killing process with pid 1994494' 00:18:45.882 killing process with pid 1994494 00:18:45.882 00:34:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # kill 1994494 
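Stripped of the xtrace noise, the zcopy abort step above boils down to roughly this sequence (a sketch assuming a target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, the stock scripts/rpc.py on its default socket, and the malloc0 base bdev created earlier in the test):

    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # the artificially slow namespace gives the abort example queued I/O to cancel
    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 \
        -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'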
00:18:45.882 [2024-05-15 00:34:11.843984] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:18:45.882 00:34:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@971 -- # wait 1994494 00:18:46.451 00:34:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:46.451 00:34:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:46.451 00:34:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:46.451 00:34:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:46.451 00:34:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:46.451 00:34:12 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:46.451 00:34:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:46.451 00:34:12 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.356 00:34:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:48.356 00:18:48.356 real 0m33.980s 00:18:48.356 user 0m47.465s 00:18:48.356 sys 0m9.152s 00:18:48.356 00:34:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # xtrace_disable 00:18:48.356 00:34:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:48.356 ************************************ 00:18:48.356 END TEST nvmf_zcopy 00:18:48.356 ************************************ 00:18:48.356 00:34:14 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:48.356 00:34:14 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:18:48.356 00:34:14 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:18:48.356 00:34:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:48.356 ************************************ 00:18:48.356 START TEST nvmf_nmic 00:18:48.356 ************************************ 00:18:48.356 00:34:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:48.615 * Looking for test storage... 
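To rerun just this test outside the full autotest pipeline, one possibility is to invoke the script directly (a sketch; it assumes a built SPDK tree at the workspace path above and root privileges, since the test loads nvme-tcp and sets up network namespaces):

    cd /var/jenkins/workspace/dsa-phy-autotest/spdk
    sudo ./test/nvmf/target/nmic.sh --transport=tcp   # same script and argument run_test passes above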
00:18:48.615 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:18:48.615 00:34:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:18:48.615 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:48.615 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:48.615 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:48.615 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:48.615 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:48.615 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:48.615 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:48.615 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:48.615 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:48.615 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:48.615 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:48.615 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:18:48.615 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:18:48.615 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:48.615 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:48.615 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:48.615 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:48.615 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:18:48.615 00:34:14 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.615 00:34:14 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.615 00:34:14 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.615 00:34:14 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.616 00:34:14 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.616 00:34:14 
nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.616 00:34:14 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:48.616 00:34:14 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.616 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:48.616 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:48.616 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:48.616 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:48.616 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:48.616 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:48.616 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:48.616 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:48.616 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:48.616 00:34:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:48.616 00:34:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:48.616 00:34:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:48.616 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:48.616 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:48.616 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:48.616 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:48.616 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:48.616 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.616 00:34:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:48.616 00:34:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.616 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:18:48.616 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:48.616 00:34:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:48.616 00:34:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:55.186 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
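The scan that starts here matches NICs by PCI vendor/device ID and then looks up their netdevs through sysfs; to inspect the same thing by hand on this node, something like the following works (a sketch; the 0x159b device ID and the 0000:27:00.x addresses are the ones reported further down in the trace):

    lspci -d 8086:159b                            # E810-family functions; this node reports 0000:27:00.0 and .1
    ls /sys/bus/pci/devices/0000:27:00.0/net/     # kernel netdev behind a function, e.g. cvl_0_0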
00:18:55.186 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:55.186 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:55.186 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:55.186 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:55.186 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:55.186 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:55.186 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:55.186 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:55.186 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:55.186 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:55.186 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:55.186 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:55.186 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:55.186 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:55.186 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:18:55.187 Found 0000:27:00.0 (0x8086 - 0x159b) 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.187 00:34:20 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:18:55.187 Found 0000:27:00.1 (0x8086 - 0x159b) 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:18:55.187 Found net devices under 0000:27:00.0: cvl_0_0 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:18:55.187 Found net devices under 0000:27:00.1: cvl_0_1 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:55.187 00:34:20 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:55.187 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:55.187 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:18:55.187 00:18:55.187 --- 10.0.0.2 ping statistics --- 00:18:55.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.187 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:55.187 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:55.187 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:18:55.187 00:18:55.187 --- 10.0.0.1 ping statistics --- 00:18:55.187 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:55.187 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@721 -- # xtrace_disable 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2003348 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2003348 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@828 -- # '[' -z 2003348 ']' 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local max_retries=100 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@837 -- # xtrace_disable 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:55.187 00:34:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:55.187 [2024-05-15 00:34:21.269407] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:18:55.187 [2024-05-15 00:34:21.269530] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:55.446 EAL: No free 2048 kB hugepages reported on node 1 00:18:55.446 [2024-05-15 00:34:21.408385] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:55.446 [2024-05-15 00:34:21.508067] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:55.446 [2024-05-15 00:34:21.508113] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:55.446 [2024-05-15 00:34:21.508123] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:55.446 [2024-05-15 00:34:21.508132] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:55.446 [2024-05-15 00:34:21.508141] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:55.446 [2024-05-15 00:34:21.508227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:55.447 [2024-05-15 00:34:21.508335] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:55.447 [2024-05-15 00:34:21.508442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:55.447 [2024-05-15 00:34:21.508451] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:56.013 00:34:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:18:56.013 00:34:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@861 -- # return 0 00:18:56.013 00:34:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:56.013 00:34:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@727 -- # xtrace_disable 00:18:56.013 00:34:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.013 [2024-05-15 00:34:22.017443] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.013 Malloc0 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.013 [2024-05-15 00:34:22.083535] nvmf_rpc.c: 
615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:18:56.013 [2024-05-15 00:34:22.083909] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:56.013 test case1: single bdev can't be used in multiple subsystems 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.013 [2024-05-15 00:34:22.107625] bdev.c:8030:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:56.013 [2024-05-15 00:34:22.107657] subsystem.c:2063:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:56.013 [2024-05-15 00:34:22.107675] nvmf_rpc.c:1536:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:56.013 request: 00:18:56.013 { 00:18:56.013 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:56.013 "namespace": { 00:18:56.013 "bdev_name": "Malloc0", 00:18:56.013 "no_auto_visible": false 00:18:56.013 }, 00:18:56.013 "method": "nvmf_subsystem_add_ns", 00:18:56.013 "req_id": 1 00:18:56.013 } 00:18:56.013 Got JSON-RPC error response 00:18:56.013 response: 00:18:56.013 { 00:18:56.013 "code": -32602, 00:18:56.013 "message": "Invalid parameters" 00:18:56.013 } 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:56.013 Adding namespace failed - expected result. 
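The JSON-RPC error above is the expected outcome of test case 1: a bdev can be claimed by only one subsystem at a time. Reproduced by hand against a running target, the sequence is roughly as follows (a sketch using the stock scripts/rpc.py and the names from the trace):

    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # first claim succeeds
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0      # rejected: Malloc0 already claimed by cnode1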
00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:56.013 test case2: host connect to nvmf target in multiple paths 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@560 -- # xtrace_disable 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:56.013 [2024-05-15 00:34:22.115738] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:18:56.013 00:34:22 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:57.914 00:34:23 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:58.852 00:34:25 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:58.852 00:34:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1195 -- # local i=0 00:18:58.852 00:34:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:18:58.852 00:34:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1197 -- # [[ -n '' ]] 00:18:58.852 00:34:25 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1202 -- # sleep 2 00:19:01.387 00:34:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:19:01.387 00:34:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:19:01.388 00:34:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:19:01.388 00:34:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1204 -- # nvme_devices=1 00:19:01.388 00:34:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:19:01.388 00:34:27 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # return 0 00:19:01.388 00:34:27 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:01.388 [global] 00:19:01.388 thread=1 00:19:01.388 invalidate=1 00:19:01.388 rw=write 00:19:01.388 time_based=1 00:19:01.388 runtime=1 00:19:01.388 ioengine=libaio 00:19:01.388 direct=1 00:19:01.388 bs=4096 00:19:01.388 iodepth=1 00:19:01.388 norandommap=0 00:19:01.388 numjobs=1 00:19:01.388 00:19:01.388 verify_dump=1 00:19:01.388 verify_backlog=512 00:19:01.388 verify_state_save=0 00:19:01.388 do_verify=1 00:19:01.388 verify=crc32c-intel 00:19:01.388 [job0] 00:19:01.388 filename=/dev/nvme0n1 00:19:01.388 Could not set queue depth (nvme0n1) 00:19:01.388 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:01.388 fio-3.35 00:19:01.388 Starting 1 thread 00:19:02.765 00:19:02.765 job0: (groupid=0, jobs=1): err= 0: pid=2004734: Wed May 15 00:34:28 2024 00:19:02.765 read: IOPS=22, BW=90.5KiB/s (92.6kB/s)(92.0KiB/1017msec) 00:19:02.765 slat (nsec): min=6095, max=36325, avg=29534.78, stdev=7897.69 00:19:02.765 
clat (usec): min=40778, max=42176, avg=41004.33, stdev=276.39 00:19:02.765 lat (usec): min=40811, max=42213, avg=41033.87, stdev=276.98 00:19:02.765 clat percentiles (usec): 00:19:02.765 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:19:02.765 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:02.765 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:02.765 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:02.765 | 99.99th=[42206] 00:19:02.765 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:19:02.765 slat (nsec): min=4302, max=57599, avg=5711.81, stdev=2805.08 00:19:02.765 clat (usec): min=116, max=499, avg=134.46, stdev=28.03 00:19:02.765 lat (usec): min=122, max=557, avg=140.17, stdev=30.03 00:19:02.765 clat percentiles (usec): 00:19:02.765 | 1.00th=[ 120], 5.00th=[ 122], 10.00th=[ 124], 20.00th=[ 125], 00:19:02.765 | 30.00th=[ 126], 40.00th=[ 127], 50.00th=[ 128], 60.00th=[ 129], 00:19:02.765 | 70.00th=[ 130], 80.00th=[ 133], 90.00th=[ 137], 95.00th=[ 215], 00:19:02.765 | 99.00th=[ 225], 99.50th=[ 231], 99.90th=[ 498], 99.95th=[ 498], 00:19:02.765 | 99.99th=[ 498] 00:19:02.765 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:19:02.765 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:02.765 lat (usec) : 250=95.51%, 500=0.19% 00:19:02.765 lat (msec) : 50=4.30% 00:19:02.765 cpu : usr=0.30%, sys=0.30%, ctx=535, majf=0, minf=1 00:19:02.765 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:02.765 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.765 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.765 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.765 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:02.765 00:19:02.765 Run status group 0 (all jobs): 00:19:02.765 READ: bw=90.5KiB/s (92.6kB/s), 90.5KiB/s-90.5KiB/s (92.6kB/s-92.6kB/s), io=92.0KiB (94.2kB), run=1017-1017msec 00:19:02.765 WRITE: bw=2014KiB/s (2062kB/s), 2014KiB/s-2014KiB/s (2062kB/s-2062kB/s), io=2048KiB (2097kB), run=1017-1017msec 00:19:02.765 00:19:02.765 Disk stats (read/write): 00:19:02.765 nvme0n1: ios=70/512, merge=0/0, ticks=847/68, in_queue=915, util=91.48% 00:19:02.765 00:34:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:02.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:02.765 00:34:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:02.765 00:34:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1216 -- # local i=0 00:19:02.765 00:34:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:19:02.765 00:34:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:02.765 00:34:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:19:02.765 00:34:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:02.765 00:34:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1228 -- # return 0 00:19:02.765 00:34:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:02.765 00:34:28 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:19:02.765 00:34:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:02.765 
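Test case 2 above connects to the same subsystem over both listeners before running the fio write job; done by hand, the same check looks roughly like this (a sketch using the host NQN/ID that nvme gen-hostnqn produced earlier in this run):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda \
        --hostid=80ef6226-405e-ee11-906e-a4bf01973fda
    nvme connect -t tcp -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda \
        --hostid=80ef6226-405e-ee11-906e-a4bf01973fda
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # one namespace, reachable via two controllers
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # expect "disconnected 2 controller(s)"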
00:34:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:19:03.024 00:34:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:03.024 00:34:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:19:03.024 00:34:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:03.024 00:34:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:03.024 rmmod nvme_tcp 00:19:03.024 rmmod nvme_fabrics 00:19:03.024 rmmod nvme_keyring 00:19:03.024 00:34:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:03.024 00:34:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:19:03.024 00:34:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:19:03.024 00:34:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2003348 ']' 00:19:03.024 00:34:28 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2003348 00:19:03.024 00:34:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@947 -- # '[' -z 2003348 ']' 00:19:03.024 00:34:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@951 -- # kill -0 2003348 00:19:03.024 00:34:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # uname 00:19:03.024 00:34:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:19:03.024 00:34:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2003348 00:19:03.024 00:34:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:19:03.024 00:34:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:19:03.024 00:34:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2003348' 00:19:03.024 killing process with pid 2003348 00:19:03.024 00:34:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # kill 2003348 00:19:03.024 [2024-05-15 00:34:29.042162] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:03.024 00:34:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@971 -- # wait 2003348 00:19:03.589 00:34:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:03.589 00:34:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:03.590 00:34:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:03.590 00:34:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:03.590 00:34:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:03.590 00:34:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:03.590 00:34:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:03.590 00:34:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.490 00:34:31 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:05.490 00:19:05.490 real 0m17.143s 00:19:05.490 user 0m47.805s 00:19:05.490 sys 0m5.835s 00:19:05.490 00:34:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # xtrace_disable 00:19:05.490 00:34:31 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:19:05.490 ************************************ 00:19:05.490 END TEST nvmf_nmic 00:19:05.490 ************************************ 00:19:05.750 00:34:31 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:05.750 00:34:31 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:19:05.750 00:34:31 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:19:05.750 00:34:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:05.750 ************************************ 00:19:05.750 START TEST nvmf_fio_target 00:19:05.750 ************************************ 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:05.750 * Looking for test storage... 00:19:05.750 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:05.750 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:05.751 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:05.751 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.751 00:34:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:05.751 00:34:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.751 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:19:05.751 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:05.751 00:34:31 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:05.751 00:34:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:11.016 00:34:36 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:19:11.016 Found 0000:27:00.0 (0x8086 - 0x159b) 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:19:11.016 Found 0000:27:00.1 (0x8086 - 0x159b) 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:11.016 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:19:11.017 
Found net devices under 0000:27:00.0: cvl_0_0 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:19:11.017 Found net devices under 0000:27:00.1: cvl_0_1 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:11.017 00:34:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:11.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:11.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:19:11.017 00:19:11.017 --- 10.0.0.2 ping statistics --- 00:19:11.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.017 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:11.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:11.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:19:11.017 00:19:11.017 --- 10.0.0.1 ping statistics --- 00:19:11.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:11.017 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@721 -- # xtrace_disable 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2009083 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2009083 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@828 -- # '[' -z 2009083 ']' 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
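For reference, the nvmftestinit/nvmf_tcp_init trace above reduces to the following network plumbing (a condensed sketch assembled from the commands visible in the trace; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are simply the values this particular run selected):

# Isolate one port of the NIC pair in its own namespace so target and
# initiator traffic must cross the physical link rather than loopback.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Initiator address in the default namespace, target address inside the namespace.
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port, confirm reachability both ways, load the initiator driver.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
modprobe nvme-tcp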
00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.017 00:34:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:11.278 [2024-05-15 00:34:37.234910] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:19:11.278 [2024-05-15 00:34:37.235018] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.278 EAL: No free 2048 kB hugepages reported on node 1 00:19:11.278 [2024-05-15 00:34:37.361220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:11.538 [2024-05-15 00:34:37.457538] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.538 [2024-05-15 00:34:37.457578] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:11.538 [2024-05-15 00:34:37.457588] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:11.538 [2024-05-15 00:34:37.457598] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:11.538 [2024-05-15 00:34:37.457605] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:11.538 [2024-05-15 00:34:37.457724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.538 [2024-05-15 00:34:37.457808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:11.538 [2024-05-15 00:34:37.457912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.538 [2024-05-15 00:34:37.457924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:11.799 00:34:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:11.799 00:34:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@861 -- # return 0 00:19:11.799 00:34:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:11.799 00:34:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:19:11.799 00:34:37 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.059 00:34:37 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:12.059 00:34:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:12.059 [2024-05-15 00:34:38.093558] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:12.059 00:34:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:12.320 00:34:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:12.320 00:34:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:12.579 00:34:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:12.579 00:34:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:12.579 00:34:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:12.579 00:34:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:12.837 00:34:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:12.837 00:34:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:13.096 00:34:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:13.096 00:34:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:13.096 00:34:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:13.355 00:34:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:13.355 00:34:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:13.616 00:34:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:13.616 00:34:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:13.616 00:34:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:13.935 00:34:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:13.935 00:34:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:13.935 00:34:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:13.935 00:34:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:14.192 00:34:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:14.192 [2024-05-15 00:34:40.279650] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:14.192 [2024-05-15 00:34:40.279984] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:14.193 00:34:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:14.450 00:34:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:14.450 00:34:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 
--hostid=80ef6226-405e-ee11-906e-a4bf01973fda -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:16.378 00:34:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:16.378 00:34:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1195 -- # local i=0 00:19:16.378 00:34:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1196 -- # local nvme_device_counter=1 nvme_devices=0 00:19:16.378 00:34:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1197 -- # [[ -n 4 ]] 00:19:16.378 00:34:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # nvme_device_counter=4 00:19:16.378 00:34:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1202 -- # sleep 2 00:19:18.279 00:34:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1203 -- # (( i++ <= 15 )) 00:19:18.279 00:34:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # lsblk -l -o NAME,SERIAL 00:19:18.279 00:34:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # grep -c SPDKISFASTANDAWESOME 00:19:18.279 00:34:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1204 -- # nvme_devices=4 00:19:18.279 00:34:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # (( nvme_devices == nvme_device_counter )) 00:19:18.279 00:34:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # return 0 00:19:18.279 00:34:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:18.279 [global] 00:19:18.279 thread=1 00:19:18.279 invalidate=1 00:19:18.279 rw=write 00:19:18.279 time_based=1 00:19:18.279 runtime=1 00:19:18.279 ioengine=libaio 00:19:18.279 direct=1 00:19:18.279 bs=4096 00:19:18.279 iodepth=1 00:19:18.279 norandommap=0 00:19:18.279 numjobs=1 00:19:18.279 00:19:18.279 verify_dump=1 00:19:18.279 verify_backlog=512 00:19:18.279 verify_state_save=0 00:19:18.279 do_verify=1 00:19:18.279 verify=crc32c-intel 00:19:18.279 [job0] 00:19:18.279 filename=/dev/nvme0n1 00:19:18.279 [job1] 00:19:18.279 filename=/dev/nvme0n2 00:19:18.279 [job2] 00:19:18.279 filename=/dev/nvme0n3 00:19:18.279 [job3] 00:19:18.279 filename=/dev/nvme0n4 00:19:18.279 Could not set queue depth (nvme0n1) 00:19:18.279 Could not set queue depth (nvme0n2) 00:19:18.279 Could not set queue depth (nvme0n3) 00:19:18.279 Could not set queue depth (nvme0n4) 00:19:18.537 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:18.537 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:18.537 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:18.537 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:18.537 fio-3.35 00:19:18.537 Starting 4 threads 00:19:19.911 00:19:19.911 job0: (groupid=0, jobs=1): err= 0: pid=2010651: Wed May 15 00:34:45 2024 00:19:19.911 read: IOPS=22, BW=89.5KiB/s (91.6kB/s)(92.0KiB/1028msec) 00:19:19.911 slat (nsec): min=5714, max=31601, avg=27429.26, stdev=7047.38 00:19:19.911 clat (usec): min=40757, max=41066, avg=40962.92, stdev=65.78 00:19:19.911 lat (usec): min=40788, max=41077, avg=40990.35, stdev=62.07 00:19:19.911 clat percentiles (usec): 00:19:19.911 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:19:19.911 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:19.911 
| 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:19.911 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:19.911 | 99.99th=[41157] 00:19:19.911 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:19:19.911 slat (nsec): min=4873, max=45084, avg=6502.54, stdev=2937.55 00:19:19.911 clat (usec): min=106, max=454, avg=157.47, stdev=38.68 00:19:19.911 lat (usec): min=112, max=499, avg=163.97, stdev=40.29 00:19:19.911 clat percentiles (usec): 00:19:19.911 | 1.00th=[ 116], 5.00th=[ 121], 10.00th=[ 125], 20.00th=[ 131], 00:19:19.911 | 30.00th=[ 137], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 155], 00:19:19.911 | 70.00th=[ 161], 80.00th=[ 174], 90.00th=[ 200], 95.00th=[ 247], 00:19:19.911 | 99.00th=[ 269], 99.50th=[ 351], 99.90th=[ 457], 99.95th=[ 457], 00:19:19.911 | 99.99th=[ 457] 00:19:19.911 bw ( KiB/s): min= 4096, max= 4096, per=25.88%, avg=4096.00, stdev= 0.00, samples=1 00:19:19.911 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:19.911 lat (usec) : 250=92.71%, 500=2.99% 00:19:19.911 lat (msec) : 50=4.30% 00:19:19.911 cpu : usr=0.00%, sys=0.58%, ctx=536, majf=0, minf=1 00:19:19.911 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:19.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.911 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.911 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:19.911 job1: (groupid=0, jobs=1): err= 0: pid=2010652: Wed May 15 00:34:45 2024 00:19:19.911 read: IOPS=22, BW=88.9KiB/s (91.0kB/s)(92.0KiB/1035msec) 00:19:19.911 slat (nsec): min=5611, max=32401, avg=24230.61, stdev=7160.13 00:19:19.911 clat (usec): min=40692, max=41628, avg=40985.49, stdev=177.86 00:19:19.911 lat (usec): min=40724, max=41634, avg=41009.72, stdev=173.12 00:19:19.911 clat percentiles (usec): 00:19:19.911 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:19:19.911 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:19.911 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:19.911 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:19:19.911 | 99.99th=[41681] 00:19:19.911 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:19:19.911 slat (nsec): min=4417, max=75906, avg=6051.48, stdev=4074.79 00:19:19.911 clat (usec): min=119, max=672, avg=170.91, stdev=42.65 00:19:19.911 lat (usec): min=125, max=748, avg=176.96, stdev=45.40 00:19:19.911 clat percentiles (usec): 00:19:19.911 | 1.00th=[ 123], 5.00th=[ 131], 10.00th=[ 137], 20.00th=[ 145], 00:19:19.911 | 30.00th=[ 151], 40.00th=[ 157], 50.00th=[ 165], 60.00th=[ 172], 00:19:19.911 | 70.00th=[ 178], 80.00th=[ 188], 90.00th=[ 204], 95.00th=[ 225], 00:19:19.911 | 99.00th=[ 338], 99.50th=[ 355], 99.90th=[ 676], 99.95th=[ 676], 00:19:19.911 | 99.99th=[ 676] 00:19:19.911 bw ( KiB/s): min= 4096, max= 4096, per=25.88%, avg=4096.00, stdev= 0.00, samples=1 00:19:19.911 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:19.911 lat (usec) : 250=91.96%, 500=3.55%, 750=0.19% 00:19:19.911 lat (msec) : 50=4.30% 00:19:19.911 cpu : usr=0.48%, sys=0.10%, ctx=535, majf=0, minf=1 00:19:19.911 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:19.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:19:19.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.911 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.911 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:19.911 job2: (groupid=0, jobs=1): err= 0: pid=2010653: Wed May 15 00:34:45 2024 00:19:19.911 read: IOPS=2369, BW=9479KiB/s (9706kB/s)(9488KiB/1001msec) 00:19:19.911 slat (nsec): min=3325, max=32306, avg=5786.08, stdev=1062.60 00:19:19.911 clat (usec): min=142, max=374, avg=224.78, stdev=26.24 00:19:19.911 lat (usec): min=145, max=407, avg=230.57, stdev=26.41 00:19:19.911 clat percentiles (usec): 00:19:19.912 | 1.00th=[ 163], 5.00th=[ 186], 10.00th=[ 198], 20.00th=[ 208], 00:19:19.912 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 227], 00:19:19.912 | 70.00th=[ 233], 80.00th=[ 241], 90.00th=[ 260], 95.00th=[ 273], 00:19:19.912 | 99.00th=[ 310], 99.50th=[ 334], 99.90th=[ 367], 99.95th=[ 375], 00:19:19.912 | 99.99th=[ 375] 00:19:19.912 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:19:19.912 slat (nsec): min=3307, max=55455, avg=6665.39, stdev=2245.08 00:19:19.912 clat (usec): min=94, max=651, avg=166.99, stdev=52.77 00:19:19.912 lat (usec): min=97, max=706, avg=173.66, stdev=53.91 00:19:19.912 clat percentiles (usec): 00:19:19.912 | 1.00th=[ 111], 5.00th=[ 120], 10.00th=[ 123], 20.00th=[ 127], 00:19:19.912 | 30.00th=[ 131], 40.00th=[ 137], 50.00th=[ 143], 60.00th=[ 157], 00:19:19.912 | 70.00th=[ 174], 80.00th=[ 229], 90.00th=[ 255], 95.00th=[ 265], 00:19:19.912 | 99.00th=[ 306], 99.50th=[ 338], 99.90th=[ 375], 99.95th=[ 433], 00:19:19.912 | 99.99th=[ 652] 00:19:19.912 bw ( KiB/s): min=11992, max=11992, per=75.76%, avg=11992.00, stdev= 0.00, samples=1 00:19:19.912 iops : min= 2998, max= 2998, avg=2998.00, stdev= 0.00, samples=1 00:19:19.912 lat (usec) : 100=0.04%, 250=87.08%, 500=12.85%, 750=0.02% 00:19:19.912 cpu : usr=1.80%, sys=4.40%, ctx=4933, majf=0, minf=1 00:19:19.912 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:19.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.912 issued rwts: total=2372,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.912 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:19.912 job3: (groupid=0, jobs=1): err= 0: pid=2010654: Wed May 15 00:34:45 2024 00:19:19.912 read: IOPS=22, BW=89.6KiB/s (91.7kB/s)(92.0KiB/1027msec) 00:19:19.912 slat (nsec): min=6373, max=31366, avg=27694.74, stdev=6954.66 00:19:19.912 clat (usec): min=40754, max=41506, avg=40974.83, stdev=137.73 00:19:19.912 lat (usec): min=40785, max=41512, avg=41002.52, stdev=132.07 00:19:19.912 clat percentiles (usec): 00:19:19.912 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:19:19.912 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:19.912 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:19.912 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:19:19.912 | 99.99th=[41681] 00:19:19.912 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:19:19.912 slat (nsec): min=4687, max=52456, avg=6247.80, stdev=3082.10 00:19:19.912 clat (usec): min=100, max=271, avg=155.37, stdev=26.82 00:19:19.912 lat (usec): min=108, max=324, avg=161.62, stdev=28.23 00:19:19.912 clat percentiles (usec): 00:19:19.912 | 1.00th=[ 114], 5.00th=[ 119], 10.00th=[ 125], 
20.00th=[ 135], 00:19:19.912 | 30.00th=[ 141], 40.00th=[ 147], 50.00th=[ 153], 60.00th=[ 159], 00:19:19.912 | 70.00th=[ 165], 80.00th=[ 176], 90.00th=[ 190], 95.00th=[ 202], 00:19:19.912 | 99.00th=[ 260], 99.50th=[ 269], 99.90th=[ 273], 99.95th=[ 273], 00:19:19.912 | 99.99th=[ 273] 00:19:19.912 bw ( KiB/s): min= 4096, max= 4096, per=25.88%, avg=4096.00, stdev= 0.00, samples=1 00:19:19.912 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:19.912 lat (usec) : 250=94.58%, 500=1.12% 00:19:19.912 lat (msec) : 50=4.30% 00:19:19.912 cpu : usr=0.19%, sys=0.29%, ctx=535, majf=0, minf=1 00:19:19.912 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:19.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.912 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.912 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:19.912 00:19:19.912 Run status group 0 (all jobs): 00:19:19.912 READ: bw=9434KiB/s (9660kB/s), 88.9KiB/s-9479KiB/s (91.0kB/s-9706kB/s), io=9764KiB (9998kB), run=1001-1035msec 00:19:19.912 WRITE: bw=15.5MiB/s (16.2MB/s), 1979KiB/s-9.99MiB/s (2026kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1035msec 00:19:19.912 00:19:19.912 Disk stats (read/write): 00:19:19.912 nvme0n1: ios=52/512, merge=0/0, ticks=1728/75, in_queue=1803, util=99.30% 00:19:19.912 nvme0n2: ios=50/512, merge=0/0, ticks=746/88, in_queue=834, util=87.92% 00:19:19.912 nvme0n3: ios=2072/2253, merge=0/0, ticks=1444/352, in_queue=1796, util=99.59% 00:19:19.912 nvme0n4: ios=45/512, merge=0/0, ticks=1463/76, in_queue=1539, util=96.66% 00:19:19.912 00:34:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:19.912 [global] 00:19:19.912 thread=1 00:19:19.912 invalidate=1 00:19:19.912 rw=randwrite 00:19:19.912 time_based=1 00:19:19.912 runtime=1 00:19:19.912 ioengine=libaio 00:19:19.912 direct=1 00:19:19.912 bs=4096 00:19:19.912 iodepth=1 00:19:19.912 norandommap=0 00:19:19.912 numjobs=1 00:19:19.912 00:19:19.912 verify_dump=1 00:19:19.912 verify_backlog=512 00:19:19.912 verify_state_save=0 00:19:19.912 do_verify=1 00:19:19.912 verify=crc32c-intel 00:19:19.912 [job0] 00:19:19.912 filename=/dev/nvme0n1 00:19:19.912 [job1] 00:19:19.912 filename=/dev/nvme0n2 00:19:19.912 [job2] 00:19:19.912 filename=/dev/nvme0n3 00:19:19.912 [job3] 00:19:19.912 filename=/dev/nvme0n4 00:19:19.912 Could not set queue depth (nvme0n1) 00:19:19.912 Could not set queue depth (nvme0n2) 00:19:19.912 Could not set queue depth (nvme0n3) 00:19:19.912 Could not set queue depth (nvme0n4) 00:19:20.169 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:20.169 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:20.169 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:20.169 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:20.169 fio-3.35 00:19:20.169 Starting 4 threads 00:19:21.553 00:19:21.553 job0: (groupid=0, jobs=1): err= 0: pid=2011132: Wed May 15 00:34:47 2024 00:19:21.553 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:19:21.553 slat (nsec): min=3149, max=45775, avg=6858.13, stdev=6360.07 
00:19:21.553 clat (usec): min=125, max=536, avg=221.65, stdev=78.10 00:19:21.553 lat (usec): min=129, max=574, avg=228.51, stdev=83.59 00:19:21.553 clat percentiles (usec): 00:19:21.553 | 1.00th=[ 139], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 169], 00:19:21.553 | 30.00th=[ 174], 40.00th=[ 182], 50.00th=[ 196], 60.00th=[ 217], 00:19:21.553 | 70.00th=[ 241], 80.00th=[ 255], 90.00th=[ 281], 95.00th=[ 449], 00:19:21.553 | 99.00th=[ 502], 99.50th=[ 510], 99.90th=[ 537], 99.95th=[ 537], 00:19:21.553 | 99.99th=[ 537] 00:19:21.553 write: IOPS=2863, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1001msec); 0 zone resets 00:19:21.553 slat (nsec): min=4274, max=39805, avg=6851.66, stdev=3183.46 00:19:21.553 clat (usec): min=83, max=546, avg=134.41, stdev=34.85 00:19:21.553 lat (usec): min=90, max=570, avg=141.26, stdev=36.36 00:19:21.553 clat percentiles (usec): 00:19:21.553 | 1.00th=[ 95], 5.00th=[ 105], 10.00th=[ 112], 20.00th=[ 117], 00:19:21.553 | 30.00th=[ 120], 40.00th=[ 122], 50.00th=[ 125], 60.00th=[ 128], 00:19:21.553 | 70.00th=[ 133], 80.00th=[ 141], 90.00th=[ 174], 95.00th=[ 219], 00:19:21.553 | 99.00th=[ 255], 99.50th=[ 289], 99.90th=[ 375], 99.95th=[ 424], 00:19:21.553 | 99.99th=[ 545] 00:19:21.553 bw ( KiB/s): min=12288, max=12288, per=56.10%, avg=12288.00, stdev= 0.00, samples=1 00:19:21.553 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:19:21.553 lat (usec) : 100=1.55%, 250=86.49%, 500=11.44%, 750=0.52% 00:19:21.553 cpu : usr=1.30%, sys=4.20%, ctx=5427, majf=0, minf=1 00:19:21.553 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:21.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.553 issued rwts: total=2560,2866,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.553 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:21.553 job1: (groupid=0, jobs=1): err= 0: pid=2011133: Wed May 15 00:34:47 2024 00:19:21.553 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:19:21.553 slat (nsec): min=3294, max=47286, avg=9215.07, stdev=8680.99 00:19:21.553 clat (usec): min=179, max=41460, avg=394.87, stdev=2329.30 00:19:21.553 lat (usec): min=185, max=41466, avg=404.09, stdev=2329.72 00:19:21.553 clat percentiles (usec): 00:19:21.553 | 1.00th=[ 190], 5.00th=[ 198], 10.00th=[ 204], 20.00th=[ 212], 00:19:21.553 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 239], 00:19:21.553 | 70.00th=[ 260], 80.00th=[ 314], 90.00th=[ 375], 95.00th=[ 416], 00:19:21.553 | 99.00th=[ 594], 99.50th=[ 652], 99.90th=[41157], 99.95th=[41681], 00:19:21.553 | 99.99th=[41681] 00:19:21.553 write: IOPS=1748, BW=6993KiB/s (7161kB/s)(7000KiB/1001msec); 0 zone resets 00:19:21.553 slat (nsec): min=4080, max=51836, avg=9932.67, stdev=8343.50 00:19:21.553 clat (usec): min=102, max=425, avg=202.04, stdev=61.75 00:19:21.553 lat (usec): min=106, max=430, avg=211.98, stdev=65.20 00:19:21.553 clat percentiles (usec): 00:19:21.553 | 1.00th=[ 117], 5.00th=[ 123], 10.00th=[ 129], 20.00th=[ 141], 00:19:21.553 | 30.00th=[ 159], 40.00th=[ 174], 50.00th=[ 190], 60.00th=[ 221], 00:19:21.553 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 269], 95.00th=[ 330], 00:19:21.553 | 99.00th=[ 367], 99.50th=[ 375], 99.90th=[ 396], 99.95th=[ 424], 00:19:21.553 | 99.99th=[ 424] 00:19:21.553 bw ( KiB/s): min= 4096, max= 4096, per=18.70%, avg=4096.00, stdev= 0.00, samples=1 00:19:21.553 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:21.553 lat (usec) : 
250=75.20%, 500=23.37%, 750=1.28% 00:19:21.553 lat (msec) : 50=0.15% 00:19:21.553 cpu : usr=2.30%, sys=4.10%, ctx=3287, majf=0, minf=1 00:19:21.553 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:21.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.553 issued rwts: total=1536,1750,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.553 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:21.553 job2: (groupid=0, jobs=1): err= 0: pid=2011143: Wed May 15 00:34:47 2024 00:19:21.553 read: IOPS=153, BW=614KiB/s (629kB/s)(632KiB/1029msec) 00:19:21.553 slat (nsec): min=3782, max=50140, avg=10629.36, stdev=12183.72 00:19:21.553 clat (usec): min=202, max=41420, avg=5754.10, stdev=13848.99 00:19:21.553 lat (usec): min=208, max=41454, avg=5764.73, stdev=13858.38 00:19:21.553 clat percentiles (usec): 00:19:21.553 | 1.00th=[ 204], 5.00th=[ 241], 10.00th=[ 249], 20.00th=[ 281], 00:19:21.553 | 30.00th=[ 367], 40.00th=[ 371], 50.00th=[ 375], 60.00th=[ 375], 00:19:21.553 | 70.00th=[ 383], 80.00th=[ 408], 90.00th=[41157], 95.00th=[41157], 00:19:21.553 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:19:21.553 | 99.99th=[41681] 00:19:21.553 write: IOPS=497, BW=1990KiB/s (2038kB/s)(2048KiB/1029msec); 0 zone resets 00:19:21.553 slat (nsec): min=3941, max=49836, avg=7139.15, stdev=2538.27 00:19:21.553 clat (usec): min=119, max=532, avg=220.20, stdev=40.08 00:19:21.553 lat (usec): min=125, max=582, avg=227.34, stdev=41.02 00:19:21.553 clat percentiles (usec): 00:19:21.553 | 1.00th=[ 130], 5.00th=[ 145], 10.00th=[ 167], 20.00th=[ 186], 00:19:21.553 | 30.00th=[ 202], 40.00th=[ 219], 50.00th=[ 237], 60.00th=[ 243], 00:19:21.553 | 70.00th=[ 245], 80.00th=[ 247], 90.00th=[ 251], 95.00th=[ 253], 00:19:21.553 | 99.00th=[ 281], 99.50th=[ 306], 99.90th=[ 537], 99.95th=[ 537], 00:19:21.553 | 99.99th=[ 537] 00:19:21.553 bw ( KiB/s): min= 4096, max= 4096, per=18.70%, avg=4096.00, stdev= 0.00, samples=1 00:19:21.553 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:21.553 lat (usec) : 250=70.75%, 500=25.52%, 750=0.60% 00:19:21.553 lat (msec) : 50=3.13% 00:19:21.553 cpu : usr=0.29%, sys=0.49%, ctx=672, majf=0, minf=1 00:19:21.553 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:21.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.553 issued rwts: total=158,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.553 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:21.553 job3: (groupid=0, jobs=1): err= 0: pid=2011151: Wed May 15 00:34:47 2024 00:19:21.553 read: IOPS=23, BW=93.2KiB/s (95.4kB/s)(96.0KiB/1030msec) 00:19:21.553 slat (nsec): min=5735, max=47331, avg=28615.04, stdev=10052.11 00:19:21.553 clat (usec): min=317, max=41297, avg=39267.65, stdev=8297.03 00:19:21.553 lat (usec): min=352, max=41303, avg=39296.27, stdev=8295.64 00:19:21.553 clat percentiles (usec): 00:19:21.553 | 1.00th=[ 318], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:19:21.553 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:21.553 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:21.553 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:21.553 | 99.99th=[41157] 00:19:21.553 write: IOPS=497, BW=1988KiB/s 
(2036kB/s)(2048KiB/1030msec); 0 zone resets 00:19:21.553 slat (nsec): min=4515, max=39270, avg=6003.77, stdev=1858.63 00:19:21.553 clat (usec): min=110, max=261, avg=159.35, stdev=21.10 00:19:21.553 lat (usec): min=116, max=289, avg=165.35, stdev=21.59 00:19:21.553 clat percentiles (usec): 00:19:21.553 | 1.00th=[ 121], 5.00th=[ 129], 10.00th=[ 135], 20.00th=[ 143], 00:19:21.553 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 163], 00:19:21.553 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 186], 95.00th=[ 198], 00:19:21.553 | 99.00th=[ 229], 99.50th=[ 237], 99.90th=[ 262], 99.95th=[ 262], 00:19:21.553 | 99.99th=[ 262] 00:19:21.553 bw ( KiB/s): min= 4096, max= 4096, per=18.70%, avg=4096.00, stdev= 0.00, samples=1 00:19:21.553 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:21.553 lat (usec) : 250=95.15%, 500=0.56% 00:19:21.553 lat (msec) : 50=4.29% 00:19:21.553 cpu : usr=0.29%, sys=0.19%, ctx=539, majf=0, minf=1 00:19:21.553 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:21.553 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.553 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.553 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.553 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:21.553 00:19:21.553 Run status group 0 (all jobs): 00:19:21.553 READ: bw=16.2MiB/s (17.0MB/s), 93.2KiB/s-9.99MiB/s (95.4kB/s-10.5MB/s), io=16.7MiB (17.5MB), run=1001-1030msec 00:19:21.553 WRITE: bw=21.4MiB/s (22.4MB/s), 1988KiB/s-11.2MiB/s (2036kB/s-11.7MB/s), io=22.0MiB (23.1MB), run=1001-1030msec 00:19:21.553 00:19:21.553 Disk stats (read/write): 00:19:21.553 nvme0n1: ios=2138/2560, merge=0/0, ticks=539/345, in_queue=884, util=89.78% 00:19:21.553 nvme0n2: ios=1037/1454, merge=0/0, ticks=467/291, in_queue=758, util=85.63% 00:19:21.553 nvme0n3: ios=209/512, merge=0/0, ticks=1191/105, in_queue=1296, util=96.94% 00:19:21.554 nvme0n4: ios=49/512, merge=0/0, ticks=1240/78, in_queue=1318, util=97.97% 00:19:21.554 00:34:47 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:21.554 [global] 00:19:21.554 thread=1 00:19:21.554 invalidate=1 00:19:21.554 rw=write 00:19:21.554 time_based=1 00:19:21.554 runtime=1 00:19:21.554 ioengine=libaio 00:19:21.554 direct=1 00:19:21.554 bs=4096 00:19:21.554 iodepth=128 00:19:21.554 norandommap=0 00:19:21.554 numjobs=1 00:19:21.554 00:19:21.554 verify_dump=1 00:19:21.554 verify_backlog=512 00:19:21.554 verify_state_save=0 00:19:21.554 do_verify=1 00:19:21.554 verify=crc32c-intel 00:19:21.554 [job0] 00:19:21.554 filename=/dev/nvme0n1 00:19:21.554 [job1] 00:19:21.554 filename=/dev/nvme0n2 00:19:21.554 [job2] 00:19:21.554 filename=/dev/nvme0n3 00:19:21.554 [job3] 00:19:21.554 filename=/dev/nvme0n4 00:19:21.554 Could not set queue depth (nvme0n1) 00:19:21.554 Could not set queue depth (nvme0n2) 00:19:21.554 Could not set queue depth (nvme0n3) 00:19:21.554 Could not set queue depth (nvme0n4) 00:19:21.813 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:21.813 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:21.813 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:21.813 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:19:21.813 fio-3.35 00:19:21.813 Starting 4 threads 00:19:23.195 00:19:23.195 job0: (groupid=0, jobs=1): err= 0: pid=2011707: Wed May 15 00:34:49 2024 00:19:23.195 read: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec) 00:19:23.195 slat (nsec): min=813, max=10219k, avg=80237.15, stdev=559204.60 00:19:23.195 clat (usec): min=2825, max=31960, avg=9317.61, stdev=3489.51 00:19:23.195 lat (usec): min=2828, max=31964, avg=9397.85, stdev=3533.41 00:19:23.195 clat percentiles (usec): 00:19:23.195 | 1.00th=[ 3916], 5.00th=[ 5800], 10.00th=[ 6652], 20.00th=[ 7046], 00:19:23.195 | 30.00th=[ 7504], 40.00th=[ 7898], 50.00th=[ 8291], 60.00th=[ 8979], 00:19:23.195 | 70.00th=[ 9634], 80.00th=[11338], 90.00th=[13304], 95.00th=[16057], 00:19:23.195 | 99.00th=[22676], 99.50th=[25297], 99.90th=[29492], 99.95th=[31851], 00:19:23.195 | 99.99th=[31851] 00:19:23.195 write: IOPS=6410, BW=25.0MiB/s (26.3MB/s)(25.2MiB/1008msec); 0 zone resets 00:19:23.195 slat (nsec): min=1493, max=9635.4k, avg=75879.98, stdev=368315.26 00:19:23.195 clat (usec): min=1624, max=31951, avg=10925.38, stdev=5313.42 00:19:23.195 lat (usec): min=1627, max=31953, avg=11001.26, stdev=5355.87 00:19:23.195 clat percentiles (usec): 00:19:23.195 | 1.00th=[ 2540], 5.00th=[ 4490], 10.00th=[ 5997], 20.00th=[ 7439], 00:19:23.195 | 30.00th=[ 7635], 40.00th=[ 7898], 50.00th=[ 8225], 60.00th=[ 9634], 00:19:23.195 | 70.00th=[14353], 80.00th=[15401], 90.00th=[18482], 95.00th=[21890], 00:19:23.195 | 99.00th=[25560], 99.50th=[26870], 99.90th=[27132], 99.95th=[27132], 00:19:23.195 | 99.99th=[31851] 00:19:23.195 bw ( KiB/s): min=20480, max=30200, per=37.02%, avg=25340.00, stdev=6873.08, samples=2 00:19:23.195 iops : min= 5120, max= 7550, avg=6335.00, stdev=1718.27, samples=2 00:19:23.196 lat (msec) : 2=0.14%, 4=2.10%, 10=63.76%, 20=29.00%, 50=5.00% 00:19:23.196 cpu : usr=1.79%, sys=3.77%, ctx=829, majf=0, minf=1 00:19:23.196 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:19:23.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.196 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:23.196 issued rwts: total=6144,6462,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.196 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:23.196 job1: (groupid=0, jobs=1): err= 0: pid=2011726: Wed May 15 00:34:49 2024 00:19:23.196 read: IOPS=3008, BW=11.8MiB/s (12.3MB/s)(12.0MiB/1021msec) 00:19:23.196 slat (nsec): min=825, max=21840k, avg=142865.54, stdev=1064024.72 00:19:23.196 clat (usec): min=8024, max=68316, avg=17288.12, stdev=9992.57 00:19:23.196 lat (usec): min=8030, max=68331, avg=17430.99, stdev=10089.01 00:19:23.196 clat percentiles (usec): 00:19:23.196 | 1.00th=[ 9110], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10945], 00:19:23.196 | 30.00th=[11469], 40.00th=[11994], 50.00th=[13173], 60.00th=[15533], 00:19:23.196 | 70.00th=[16909], 80.00th=[20841], 90.00th=[27395], 95.00th=[46400], 00:19:23.196 | 99.00th=[51643], 99.50th=[54789], 99.90th=[56361], 99.95th=[62653], 00:19:23.196 | 99.99th=[68682] 00:19:23.196 write: IOPS=3398, BW=13.3MiB/s (13.9MB/s)(13.6MiB/1021msec); 0 zone resets 00:19:23.196 slat (nsec): min=1491, max=10728k, avg=154138.88, stdev=707319.47 00:19:23.196 clat (usec): min=2993, max=56546, avg=21693.25, stdev=10314.93 00:19:23.196 lat (usec): min=3000, max=56549, avg=21847.39, stdev=10360.20 00:19:23.196 clat percentiles (usec): 00:19:23.196 | 1.00th=[ 7701], 5.00th=[11076], 10.00th=[13435], 20.00th=[14353], 
00:19:23.196 | 30.00th=[14877], 40.00th=[15270], 50.00th=[16057], 60.00th=[21627], 00:19:23.196 | 70.00th=[23462], 80.00th=[29492], 90.00th=[39584], 95.00th=[43254], 00:19:23.196 | 99.00th=[51119], 99.50th=[52167], 99.90th=[54789], 99.95th=[56361], 00:19:23.196 | 99.99th=[56361] 00:19:23.196 bw ( KiB/s): min=11040, max=15696, per=19.53%, avg=13368.00, stdev=3292.29, samples=2 00:19:23.196 iops : min= 2760, max= 3924, avg=3342.00, stdev=823.07, samples=2 00:19:23.196 lat (msec) : 4=0.09%, 10=3.99%, 20=60.41%, 50=31.96%, 100=3.55% 00:19:23.196 cpu : usr=1.37%, sys=2.94%, ctx=453, majf=0, minf=1 00:19:23.196 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:19:23.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.196 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:23.196 issued rwts: total=3072,3470,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.196 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:23.196 job2: (groupid=0, jobs=1): err= 0: pid=2011754: Wed May 15 00:34:49 2024 00:19:23.196 read: IOPS=2390, BW=9560KiB/s (9790kB/s)(9656KiB/1010msec) 00:19:23.196 slat (nsec): min=1027, max=14608k, avg=118639.20, stdev=820888.10 00:19:23.196 clat (usec): min=4146, max=42382, avg=14210.48, stdev=7333.71 00:19:23.196 lat (usec): min=4149, max=42390, avg=14329.11, stdev=7395.13 00:19:23.196 clat percentiles (usec): 00:19:23.196 | 1.00th=[ 4883], 5.00th=[ 7177], 10.00th=[ 8225], 20.00th=[ 8586], 00:19:23.196 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10945], 60.00th=[13960], 00:19:23.196 | 70.00th=[15926], 80.00th=[18482], 90.00th=[23462], 95.00th=[28967], 00:19:23.196 | 99.00th=[39584], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:19:23.196 | 99.99th=[42206] 00:19:23.196 write: IOPS=2534, BW=9.90MiB/s (10.4MB/s)(10.0MiB/1010msec); 0 zone resets 00:19:23.196 slat (nsec): min=1665, max=40864k, avg=275192.15, stdev=1896094.65 00:19:23.196 clat (msec): min=2, max=139, avg=33.35, stdev=28.14 00:19:23.196 lat (msec): min=2, max=139, avg=33.62, stdev=28.32 00:19:23.196 clat percentiles (msec): 00:19:23.196 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 15], 00:19:23.196 | 30.00th=[ 16], 40.00th=[ 17], 50.00th=[ 23], 60.00th=[ 25], 00:19:23.196 | 70.00th=[ 42], 80.00th=[ 52], 90.00th=[ 77], 95.00th=[ 94], 00:19:23.196 | 99.00th=[ 124], 99.50th=[ 140], 99.90th=[ 140], 99.95th=[ 140], 00:19:23.196 | 99.99th=[ 140] 00:19:23.196 bw ( KiB/s): min= 9328, max=11152, per=14.96%, avg=10240.00, stdev=1289.76, samples=2 00:19:23.196 iops : min= 2332, max= 2788, avg=2560.00, stdev=322.44, samples=2 00:19:23.196 lat (msec) : 4=0.54%, 10=24.99%, 20=38.46%, 50=25.45%, 100=8.04% 00:19:23.196 lat (msec) : 250=2.51% 00:19:23.196 cpu : usr=1.29%, sys=1.49%, ctx=332, majf=0, minf=1 00:19:23.196 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:19:23.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.196 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:23.196 issued rwts: total=2414,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.196 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:23.196 job3: (groupid=0, jobs=1): err= 0: pid=2011765: Wed May 15 00:34:49 2024 00:19:23.196 read: IOPS=4513, BW=17.6MiB/s (18.5MB/s)(18.0MiB/1021msec) 00:19:23.196 slat (nsec): min=1342, max=15677k, avg=103647.05, stdev=796394.04 00:19:23.196 clat (usec): min=2722, max=40629, avg=12336.99, stdev=5323.75 00:19:23.196 lat 
(usec): min=2725, max=40633, avg=12440.64, stdev=5391.10 00:19:23.196 clat percentiles (usec): 00:19:23.196 | 1.00th=[ 5407], 5.00th=[ 7308], 10.00th=[ 7701], 20.00th=[ 8356], 00:19:23.196 | 30.00th=[ 8717], 40.00th=[ 9241], 50.00th=[ 9765], 60.00th=[12518], 00:19:23.196 | 70.00th=[14091], 80.00th=[16188], 90.00th=[18744], 95.00th=[22938], 00:19:23.196 | 99.00th=[32375], 99.50th=[36439], 99.90th=[40633], 99.95th=[40633], 00:19:23.196 | 99.99th=[40633] 00:19:23.196 write: IOPS=4879, BW=19.1MiB/s (20.0MB/s)(19.5MiB/1021msec); 0 zone resets 00:19:23.196 slat (usec): min=2, max=11142, avg=102.43, stdev=605.85 00:19:23.196 clat (usec): min=732, max=54968, avg=14532.14, stdev=10167.85 00:19:23.196 lat (usec): min=736, max=54972, avg=14634.57, stdev=10231.21 00:19:23.196 clat percentiles (usec): 00:19:23.196 | 1.00th=[ 2966], 5.00th=[ 5407], 10.00th=[ 6849], 20.00th=[ 7767], 00:19:23.196 | 30.00th=[ 8356], 40.00th=[ 8717], 50.00th=[11207], 60.00th=[14091], 00:19:23.196 | 70.00th=[15926], 80.00th=[17957], 90.00th=[24773], 95.00th=[42206], 00:19:23.196 | 99.00th=[51119], 99.50th=[51119], 99.90th=[54789], 99.95th=[54789], 00:19:23.196 | 99.99th=[54789] 00:19:23.196 bw ( KiB/s): min=14256, max=24576, per=28.36%, avg=19416.00, stdev=7297.34, samples=2 00:19:23.196 iops : min= 3564, max= 6144, avg=4854.00, stdev=1824.34, samples=2 00:19:23.196 lat (usec) : 750=0.07%, 1000=0.01% 00:19:23.196 lat (msec) : 2=0.08%, 4=1.04%, 10=46.82%, 20=38.73%, 50=12.35% 00:19:23.196 lat (msec) : 100=0.90% 00:19:23.196 cpu : usr=1.18%, sys=3.24%, ctx=499, majf=0, minf=1 00:19:23.196 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:19:23.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.196 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:23.196 issued rwts: total=4608,4982,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.196 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:23.196 00:19:23.196 Run status group 0 (all jobs): 00:19:23.196 READ: bw=62.1MiB/s (65.1MB/s), 9560KiB/s-23.8MiB/s (9790kB/s-25.0MB/s), io=63.4MiB (66.5MB), run=1008-1021msec 00:19:23.196 WRITE: bw=66.9MiB/s (70.1MB/s), 9.90MiB/s-25.0MiB/s (10.4MB/s-26.3MB/s), io=68.3MiB (71.6MB), run=1008-1021msec 00:19:23.196 00:19:23.196 Disk stats (read/write): 00:19:23.196 nvme0n1: ios=4658/4743, merge=0/0, ticks=43971/57073, in_queue=101044, util=83.17% 00:19:23.196 nvme0n2: ios=2560/2815, merge=0/0, ticks=21725/22912, in_queue=44637, util=84.07% 00:19:23.196 nvme0n3: ios=1696/2048, merge=0/0, ticks=25715/55311, in_queue=81026, util=97.63% 00:19:23.196 nvme0n4: ios=4154/4335, merge=0/0, ticks=47682/51889, in_queue=99571, util=99.02% 00:19:23.196 00:34:49 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:23.196 [global] 00:19:23.196 thread=1 00:19:23.196 invalidate=1 00:19:23.196 rw=randwrite 00:19:23.196 time_based=1 00:19:23.196 runtime=1 00:19:23.196 ioengine=libaio 00:19:23.196 direct=1 00:19:23.196 bs=4096 00:19:23.196 iodepth=128 00:19:23.196 norandommap=0 00:19:23.196 numjobs=1 00:19:23.196 00:19:23.196 verify_dump=1 00:19:23.196 verify_backlog=512 00:19:23.196 verify_state_save=0 00:19:23.196 do_verify=1 00:19:23.196 verify=crc32c-intel 00:19:23.196 [job0] 00:19:23.196 filename=/dev/nvme0n1 00:19:23.196 [job1] 00:19:23.196 filename=/dev/nvme0n2 00:19:23.196 [job2] 00:19:23.196 filename=/dev/nvme0n3 00:19:23.196 [job3] 00:19:23.196 
filename=/dev/nvme0n4 00:19:23.196 Could not set queue depth (nvme0n1) 00:19:23.196 Could not set queue depth (nvme0n2) 00:19:23.196 Could not set queue depth (nvme0n3) 00:19:23.196 Could not set queue depth (nvme0n4) 00:19:23.459 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:23.459 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:23.459 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:23.459 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:23.459 fio-3.35 00:19:23.459 Starting 4 threads 00:19:24.848 00:19:24.848 job0: (groupid=0, jobs=1): err= 0: pid=2012296: Wed May 15 00:34:50 2024 00:19:24.848 read: IOPS=4049, BW=15.8MiB/s (16.6MB/s)(15.9MiB/1006msec) 00:19:24.848 slat (nsec): min=888, max=15730k, avg=113469.69, stdev=882393.10 00:19:24.848 clat (usec): min=2816, max=88226, avg=13272.45, stdev=10734.91 00:19:24.848 lat (usec): min=2818, max=88232, avg=13385.92, stdev=10844.24 00:19:24.848 clat percentiles (usec): 00:19:24.848 | 1.00th=[ 5080], 5.00th=[ 6259], 10.00th=[ 6980], 20.00th=[ 7373], 00:19:24.848 | 30.00th=[ 7963], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[10159], 00:19:24.848 | 70.00th=[12518], 80.00th=[19268], 90.00th=[22676], 95.00th=[31327], 00:19:24.848 | 99.00th=[67634], 99.50th=[81265], 99.90th=[88605], 99.95th=[88605], 00:19:24.848 | 99.99th=[88605] 00:19:24.848 write: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec); 0 zone resets 00:19:24.848 slat (nsec): min=1450, max=27753k, avg=115461.89, stdev=827477.65 00:19:24.848 clat (usec): min=1741, max=110062, avg=17924.22, stdev=21614.91 00:19:24.848 lat (usec): min=1744, max=110069, avg=18039.68, stdev=21756.60 00:19:24.848 clat percentiles (msec): 00:19:24.848 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 6], 20.00th=[ 7], 00:19:24.848 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 10], 00:19:24.848 | 70.00th=[ 14], 80.00th=[ 24], 90.00th=[ 50], 95.00th=[ 66], 00:19:24.848 | 99.00th=[ 107], 99.50th=[ 107], 99.90th=[ 110], 99.95th=[ 110], 00:19:24.848 | 99.99th=[ 110] 00:19:24.848 bw ( KiB/s): min= 8080, max=24688, per=25.04%, avg=16384.00, stdev=11743.63, samples=2 00:19:24.848 iops : min= 2020, max= 6172, avg=4096.00, stdev=2935.91, samples=2 00:19:24.848 lat (msec) : 2=0.07%, 4=1.90%, 10=59.05%, 20=17.50%, 50=16.78% 00:19:24.848 lat (msec) : 100=3.61%, 250=1.09% 00:19:24.848 cpu : usr=1.69%, sys=2.49%, ctx=405, majf=0, minf=1 00:19:24.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:24.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:24.848 issued rwts: total=4074,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.848 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:24.848 job1: (groupid=0, jobs=1): err= 0: pid=2012308: Wed May 15 00:34:50 2024 00:19:24.848 read: IOPS=3183, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1005msec) 00:19:24.848 slat (nsec): min=912, max=19696k, avg=161017.54, stdev=1149527.91 00:19:24.848 clat (msec): min=2, max=101, avg=17.53, stdev=12.92 00:19:24.848 lat (msec): min=5, max=101, avg=17.69, stdev=13.05 00:19:24.848 clat percentiles (msec): 00:19:24.848 | 1.00th=[ 8], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 10], 00:19:24.848 | 30.00th=[ 11], 40.00th=[ 13], 50.00th=[ 15], 
60.00th=[ 17], 00:19:24.848 | 70.00th=[ 18], 80.00th=[ 21], 90.00th=[ 27], 95.00th=[ 44], 00:19:24.848 | 99.00th=[ 84], 99.50th=[ 92], 99.90th=[ 102], 99.95th=[ 102], 00:19:24.848 | 99.99th=[ 102] 00:19:24.848 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:19:24.848 slat (nsec): min=1634, max=13942k, avg=132205.81, stdev=810679.66 00:19:24.848 clat (msec): min=2, max=100, avg=19.92, stdev=15.91 00:19:24.848 lat (msec): min=2, max=100, avg=20.05, stdev=15.99 00:19:24.848 clat percentiles (msec): 00:19:24.848 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:19:24.848 | 30.00th=[ 11], 40.00th=[ 12], 50.00th=[ 15], 60.00th=[ 16], 00:19:24.848 | 70.00th=[ 23], 80.00th=[ 29], 90.00th=[ 44], 95.00th=[ 52], 00:19:24.848 | 99.00th=[ 78], 99.50th=[ 85], 99.90th=[ 88], 99.95th=[ 102], 00:19:24.848 | 99.99th=[ 102] 00:19:24.848 bw ( KiB/s): min=13616, max=15048, per=21.90%, avg=14332.00, stdev=1012.58, samples=2 00:19:24.848 iops : min= 3404, max= 3762, avg=3583.00, stdev=253.14, samples=2 00:19:24.848 lat (msec) : 4=0.19%, 10=27.79%, 20=43.96%, 50=23.15%, 100=4.81% 00:19:24.848 lat (msec) : 250=0.10% 00:19:24.848 cpu : usr=1.29%, sys=3.39%, ctx=288, majf=0, minf=1 00:19:24.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:24.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:24.848 issued rwts: total=3199,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.848 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:24.848 job2: (groupid=0, jobs=1): err= 0: pid=2012328: Wed May 15 00:34:50 2024 00:19:24.848 read: IOPS=2529, BW=9.88MiB/s (10.4MB/s)(10.0MiB/1012msec) 00:19:24.848 slat (nsec): min=984, max=18250k, avg=133723.53, stdev=938509.14 00:19:24.848 clat (usec): min=4257, max=48536, avg=15932.80, stdev=7094.07 00:19:24.848 lat (usec): min=4261, max=48539, avg=16066.52, stdev=7166.18 00:19:24.848 clat percentiles (usec): 00:19:24.848 | 1.00th=[ 8356], 5.00th=[ 8586], 10.00th=[ 9896], 20.00th=[10028], 00:19:24.848 | 30.00th=[10814], 40.00th=[13960], 50.00th=[15664], 60.00th=[16712], 00:19:24.848 | 70.00th=[17433], 80.00th=[18482], 90.00th=[22938], 95.00th=[29492], 00:19:24.848 | 99.00th=[45351], 99.50th=[47449], 99.90th=[48497], 99.95th=[48497], 00:19:24.848 | 99.99th=[48497] 00:19:24.848 write: IOPS=2828, BW=11.0MiB/s (11.6MB/s)(11.2MiB/1012msec); 0 zone resets 00:19:24.848 slat (nsec): min=1666, max=12520k, avg=226755.70, stdev=1144881.76 00:19:24.848 clat (msec): min=2, max=120, avg=30.58, stdev=25.83 00:19:24.848 lat (msec): min=2, max=120, avg=30.80, stdev=25.99 00:19:24.848 clat percentiles (msec): 00:19:24.848 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 11], 00:19:24.848 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 19], 60.00th=[ 24], 00:19:24.848 | 70.00th=[ 36], 80.00th=[ 47], 90.00th=[ 77], 95.00th=[ 89], 00:19:24.848 | 99.00th=[ 111], 99.50th=[ 116], 99.90th=[ 121], 99.95th=[ 121], 00:19:24.848 | 99.99th=[ 121] 00:19:24.848 bw ( KiB/s): min=10480, max=11392, per=16.71%, avg=10936.00, stdev=644.88, samples=2 00:19:24.848 iops : min= 2620, max= 2848, avg=2734.00, stdev=161.22, samples=2 00:19:24.848 lat (msec) : 4=0.11%, 10=19.22%, 20=47.99%, 50=23.72%, 100=7.95% 00:19:24.848 lat (msec) : 250=1.01% 00:19:24.848 cpu : usr=1.19%, sys=2.97%, ctx=301, majf=0, minf=1 00:19:24.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:24.848 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:24.848 issued rwts: total=2560,2862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.848 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:24.848 job3: (groupid=0, jobs=1): err= 0: pid=2012335: Wed May 15 00:34:50 2024 00:19:24.848 read: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec) 00:19:24.848 slat (nsec): min=885, max=45973k, avg=82642.09, stdev=751503.02 00:19:24.848 clat (usec): min=5403, max=64574, avg=9380.11, stdev=4145.96 00:19:24.848 lat (usec): min=5405, max=64579, avg=9462.76, stdev=4227.60 00:19:24.848 clat percentiles (usec): 00:19:24.848 | 1.00th=[ 5997], 5.00th=[ 6652], 10.00th=[ 7570], 20.00th=[ 8160], 00:19:24.848 | 30.00th=[ 8291], 40.00th=[ 8356], 50.00th=[ 8455], 60.00th=[ 8979], 00:19:24.848 | 70.00th=[ 9503], 80.00th=[10290], 90.00th=[11076], 95.00th=[11994], 00:19:24.848 | 99.00th=[18744], 99.50th=[57934], 99.90th=[57934], 99.95th=[57934], 00:19:24.848 | 99.99th=[64750] 00:19:24.848 write: IOPS=5985, BW=23.4MiB/s (24.5MB/s)(23.5MiB/1005msec); 0 zone resets 00:19:24.848 slat (nsec): min=1485, max=29370k, avg=87011.87, stdev=743628.96 00:19:24.848 clat (usec): min=3659, max=87147, avg=11858.76, stdev=12974.02 00:19:24.848 lat (usec): min=3662, max=87153, avg=11945.77, stdev=13045.11 00:19:24.848 clat percentiles (usec): 00:19:24.848 | 1.00th=[ 4948], 5.00th=[ 7111], 10.00th=[ 7898], 20.00th=[ 8160], 00:19:24.848 | 30.00th=[ 8291], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 8848], 00:19:24.848 | 70.00th=[ 8979], 80.00th=[ 9503], 90.00th=[12256], 95.00th=[30802], 00:19:24.848 | 99.00th=[85459], 99.50th=[87557], 99.90th=[87557], 99.95th=[87557], 00:19:24.848 | 99.99th=[87557] 00:19:24.848 bw ( KiB/s): min=16416, max=30688, per=35.99%, avg=23552.00, stdev=10091.83, samples=2 00:19:24.848 iops : min= 4104, max= 7672, avg=5888.00, stdev=2522.96, samples=2 00:19:24.848 lat (msec) : 4=0.06%, 10=79.94%, 20=16.78%, 50=0.76%, 100=2.46% 00:19:24.848 cpu : usr=1.39%, sys=2.79%, ctx=739, majf=0, minf=1 00:19:24.848 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:19:24.848 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.848 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:24.848 issued rwts: total=5632,6015,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.848 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:24.848 00:19:24.848 Run status group 0 (all jobs): 00:19:24.848 READ: bw=59.7MiB/s (62.6MB/s), 9.88MiB/s-21.9MiB/s (10.4MB/s-23.0MB/s), io=60.4MiB (63.3MB), run=1005-1012msec 00:19:24.848 WRITE: bw=63.9MiB/s (67.0MB/s), 11.0MiB/s-23.4MiB/s (11.6MB/s-24.5MB/s), io=64.7MiB (67.8MB), run=1005-1012msec 00:19:24.848 00:19:24.848 Disk stats (read/write): 00:19:24.848 nvme0n1: ios=3634/3967, merge=0/0, ticks=35174/54541, in_queue=89715, util=86.87% 00:19:24.848 nvme0n2: ios=2311/2560, merge=0/0, ticks=45001/61008, in_queue=106009, util=98.38% 00:19:24.848 nvme0n3: ios=2574/2560, merge=0/0, ticks=40448/66910, in_queue=107358, util=99.90% 00:19:24.848 nvme0n4: ios=4642/4740, merge=0/0, ticks=24154/23029, in_queue=47183, util=97.91% 00:19:24.848 00:34:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:19:24.848 00:34:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2012415 00:19:24.848 00:34:50 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:19:24.848 00:34:50 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:24.848 [global] 00:19:24.848 thread=1 00:19:24.848 invalidate=1 00:19:24.848 rw=read 00:19:24.848 time_based=1 00:19:24.848 runtime=10 00:19:24.848 ioengine=libaio 00:19:24.849 direct=1 00:19:24.849 bs=4096 00:19:24.849 iodepth=1 00:19:24.849 norandommap=1 00:19:24.849 numjobs=1 00:19:24.849 00:19:24.849 [job0] 00:19:24.849 filename=/dev/nvme0n1 00:19:24.849 [job1] 00:19:24.849 filename=/dev/nvme0n2 00:19:24.849 [job2] 00:19:24.849 filename=/dev/nvme0n3 00:19:24.849 [job3] 00:19:24.849 filename=/dev/nvme0n4 00:19:24.849 Could not set queue depth (nvme0n1) 00:19:24.849 Could not set queue depth (nvme0n2) 00:19:24.849 Could not set queue depth (nvme0n3) 00:19:24.849 Could not set queue depth (nvme0n4) 00:19:25.107 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:25.107 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:25.107 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:25.107 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:25.107 fio-3.35 00:19:25.107 Starting 4 threads 00:19:27.636 00:34:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:27.894 00:34:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:27.894 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=46501888, buflen=4096 00:19:27.894 fio: pid=2012823, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:27.894 00:34:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:27.894 00:34:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:27.894 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=42876928, buflen=4096 00:19:27.894 fio: pid=2012822, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:28.153 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=26759168, buflen=4096 00:19:28.153 fio: pid=2012820, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:28.153 00:34:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:28.153 00:34:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:28.153 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=46465024, buflen=4096 00:19:28.153 fio: pid=2012821, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:28.153 00:34:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:28.153 00:34:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:28.411 00:19:28.411 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2012820: Wed May 15 00:34:54 2024 00:19:28.411 
read: IOPS=2243, BW=8974KiB/s (9189kB/s)(25.5MiB/2912msec) 00:19:28.411 slat (usec): min=2, max=32086, avg=10.74, stdev=410.05 00:19:28.411 clat (usec): min=165, max=41217, avg=430.35, stdev=2611.40 00:19:28.411 lat (usec): min=171, max=41250, avg=441.08, stdev=2644.83 00:19:28.411 clat percentiles (usec): 00:19:28.411 | 1.00th=[ 200], 5.00th=[ 219], 10.00th=[ 233], 20.00th=[ 243], 00:19:28.411 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 269], 00:19:28.411 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 306], 00:19:28.411 | 99.00th=[ 347], 99.50th=[ 486], 99.90th=[41157], 99.95th=[41157], 00:19:28.411 | 99.99th=[41157] 00:19:28.411 bw ( KiB/s): min= 96, max=15128, per=16.20%, avg=8387.20, stdev=7693.09, samples=5 00:19:28.411 iops : min= 24, max= 3782, avg=2096.80, stdev=1923.27, samples=5 00:19:28.411 lat (usec) : 250=29.19%, 500=70.31%, 750=0.08% 00:19:28.411 lat (msec) : 50=0.41% 00:19:28.411 cpu : usr=0.38%, sys=1.89%, ctx=6537, majf=0, minf=1 00:19:28.411 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:28.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.411 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.411 issued rwts: total=6534,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.411 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:28.411 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2012821: Wed May 15 00:34:54 2024 00:19:28.411 read: IOPS=3699, BW=14.4MiB/s (15.1MB/s)(44.3MiB/3067msec) 00:19:28.411 slat (usec): min=2, max=16979, avg=11.86, stdev=302.06 00:19:28.411 clat (usec): min=150, max=3783, avg=255.49, stdev=46.30 00:19:28.411 lat (usec): min=156, max=17489, avg=267.35, stdev=309.60 00:19:28.411 clat percentiles (usec): 00:19:28.411 | 1.00th=[ 180], 5.00th=[ 198], 10.00th=[ 212], 20.00th=[ 229], 00:19:28.411 | 30.00th=[ 243], 40.00th=[ 251], 50.00th=[ 260], 60.00th=[ 265], 00:19:28.411 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 289], 95.00th=[ 302], 00:19:28.411 | 99.00th=[ 326], 99.50th=[ 338], 99.90th=[ 457], 99.95th=[ 486], 00:19:28.411 | 99.99th=[ 865] 00:19:28.411 bw ( KiB/s): min=14256, max=16664, per=29.57%, avg=15310.40, stdev=924.78, samples=5 00:19:28.411 iops : min= 3564, max= 4166, avg=3827.60, stdev=231.19, samples=5 00:19:28.411 lat (usec) : 250=38.47%, 500=61.49%, 750=0.02%, 1000=0.01% 00:19:28.411 lat (msec) : 4=0.01% 00:19:28.411 cpu : usr=0.82%, sys=3.10%, ctx=11350, majf=0, minf=1 00:19:28.411 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:28.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.411 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.411 issued rwts: total=11345,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.411 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:28.411 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2012822: Wed May 15 00:34:54 2024 00:19:28.411 read: IOPS=3760, BW=14.7MiB/s (15.4MB/s)(40.9MiB/2784msec) 00:19:28.411 slat (nsec): min=1962, max=38210, avg=5068.12, stdev=1690.30 00:19:28.411 clat (usec): min=159, max=929, avg=257.59, stdev=34.95 00:19:28.411 lat (usec): min=164, max=935, avg=262.66, stdev=35.17 00:19:28.411 clat percentiles (usec): 00:19:28.411 | 1.00th=[ 194], 5.00th=[ 210], 10.00th=[ 219], 20.00th=[ 231], 00:19:28.411 | 30.00th=[ 241], 40.00th=[ 249], 50.00th=[ 
258], 60.00th=[ 265], 00:19:28.411 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 306], 00:19:28.411 | 99.00th=[ 363], 99.50th=[ 400], 99.90th=[ 482], 99.95th=[ 734], 00:19:28.411 | 99.99th=[ 824] 00:19:28.411 bw ( KiB/s): min=14400, max=16024, per=29.30%, avg=15168.00, stdev=636.94, samples=5 00:19:28.411 iops : min= 3600, max= 4006, avg=3792.00, stdev=159.24, samples=5 00:19:28.411 lat (usec) : 250=41.01%, 500=58.89%, 750=0.05%, 1000=0.05% 00:19:28.411 cpu : usr=0.79%, sys=3.41%, ctx=10470, majf=0, minf=1 00:19:28.411 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:28.411 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.411 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.411 issued rwts: total=10469,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.411 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:28.411 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2012823: Wed May 15 00:34:54 2024 00:19:28.411 read: IOPS=4305, BW=16.8MiB/s (17.6MB/s)(44.3MiB/2637msec) 00:19:28.411 slat (usec): min=2, max=112, avg= 5.77, stdev= 1.34 00:19:28.411 clat (usec): min=157, max=645, avg=223.04, stdev=21.45 00:19:28.411 lat (usec): min=163, max=698, avg=228.81, stdev=21.65 00:19:28.411 clat percentiles (usec): 00:19:28.411 | 1.00th=[ 182], 5.00th=[ 194], 10.00th=[ 200], 20.00th=[ 206], 00:19:28.411 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 221], 60.00th=[ 227], 00:19:28.411 | 70.00th=[ 231], 80.00th=[ 239], 90.00th=[ 249], 95.00th=[ 260], 00:19:28.411 | 99.00th=[ 281], 99.50th=[ 289], 99.90th=[ 318], 99.95th=[ 388], 00:19:28.411 | 99.99th=[ 603] 00:19:28.411 bw ( KiB/s): min=17232, max=17744, per=33.67%, avg=17433.60, stdev=227.92, samples=5 00:19:28.411 iops : min= 4308, max= 4436, avg=4358.40, stdev=56.98, samples=5 00:19:28.411 lat (usec) : 250=90.98%, 500=8.97%, 750=0.04% 00:19:28.411 cpu : usr=0.76%, sys=4.86%, ctx=11355, majf=0, minf=2 00:19:28.412 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:28.412 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.412 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:28.412 issued rwts: total=11354,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:28.412 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:28.412 00:19:28.412 Run status group 0 (all jobs): 00:19:28.412 READ: bw=50.6MiB/s (53.0MB/s), 8974KiB/s-16.8MiB/s (9189kB/s-17.6MB/s), io=155MiB (163MB), run=2637-3067msec 00:19:28.412 00:19:28.412 Disk stats (read/write): 00:19:28.412 nvme0n1: ios=6440/0, merge=0/0, ticks=3573/0, in_queue=3573, util=98.57% 00:19:28.412 nvme0n2: ios=10784/0, merge=0/0, ticks=2704/0, in_queue=2704, util=94.87% 00:19:28.412 nvme0n3: ios=9940/0, merge=0/0, ticks=3394/0, in_queue=3394, util=100.00% 00:19:28.412 nvme0n4: ios=11353/0, merge=0/0, ticks=2460/0, in_queue=2460, util=96.46% 00:19:28.412 00:34:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:28.412 00:34:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:28.670 00:34:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:28.670 00:34:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:28.670 00:34:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:28.670 00:34:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:28.929 00:34:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:28.929 00:34:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:28.929 00:34:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:19:28.929 00:34:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 2012415 00:19:28.929 00:34:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:19:28.929 00:34:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:29.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:29.496 00:34:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:29.496 00:34:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1216 -- # local i=0 00:19:29.496 00:34:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1217 -- # lsblk -o NAME,SERIAL 00:19:29.496 00:34:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1217 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:29.496 00:34:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -l -o NAME,SERIAL 00:19:29.496 00:34:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:29.496 00:34:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1228 -- # return 0 00:19:29.496 00:34:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:29.496 00:34:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:29.496 nvmf hotplug test: fio failed as expected 00:19:29.496 00:34:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:29.496 00:34:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:29.496 00:34:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:29.496 00:34:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:29.496 00:34:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:29.496 00:34:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:19:29.496 00:34:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:29.496 00:34:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:19:29.496 00:34:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:29.496 00:34:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:19:29.496 00:34:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:29.496 00:34:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:29.496 rmmod nvme_tcp 00:19:29.755 rmmod nvme_fabrics 00:19:29.755 rmmod nvme_keyring 00:19:29.755 00:34:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe 
-v -r nvme-fabrics 00:19:29.755 00:34:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:19:29.755 00:34:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:19:29.755 00:34:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2009083 ']' 00:19:29.755 00:34:55 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2009083 00:19:29.755 00:34:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@947 -- # '[' -z 2009083 ']' 00:19:29.755 00:34:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@951 -- # kill -0 2009083 00:19:29.755 00:34:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # uname 00:19:29.755 00:34:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:19:29.755 00:34:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2009083 00:19:29.755 00:34:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:19:29.755 00:34:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:19:29.755 00:34:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2009083' 00:19:29.755 killing process with pid 2009083 00:19:29.755 00:34:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # kill 2009083 00:19:29.755 [2024-05-15 00:34:55.742949] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:29.755 00:34:55 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@971 -- # wait 2009083 00:19:30.323 00:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:30.323 00:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:30.323 00:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:30.323 00:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:30.323 00:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:30.323 00:34:56 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.323 00:34:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:30.323 00:34:56 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.225 00:34:58 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:32.225 00:19:32.225 real 0m26.570s 00:19:32.225 user 2m49.211s 00:19:32.225 sys 0m7.501s 00:19:32.225 00:34:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:19:32.225 00:34:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.225 ************************************ 00:19:32.225 END TEST nvmf_fio_target 00:19:32.225 ************************************ 00:19:32.225 00:34:58 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:32.225 00:34:58 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:19:32.225 00:34:58 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:19:32.225 00:34:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:32.225 ************************************ 00:19:32.225 START TEST nvmf_bdevio 
00:19:32.225 ************************************ 00:19:32.225 00:34:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:32.484 * Looking for test storage... 00:19:32.484 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:19:32.484 00:34:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:19:32.484 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:19:32.484 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:32.484 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:32.484 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:32.484 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:32.484 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:32.484 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:32.484 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:32.484 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:32.484 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:32.484 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:32.484 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:19:32.484 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:19:32.484 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:32.484 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:32.484 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:19:32.484 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:19:32.485 00:34:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 
-- # (( 2 == 0 )) 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:19:39.054 Found 0000:27:00.0 (0x8086 - 0x159b) 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:19:39.054 Found 0000:27:00.1 (0x8086 - 0x159b) 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:19:39.054 Found net devices under 0000:27:00.0: cvl_0_0 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:19:39.054 Found net devices under 0000:27:00.1: cvl_0_1 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:39.054 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:39.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:39.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:19:39.055 00:19:39.055 --- 10.0.0.2 ping statistics --- 00:19:39.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.055 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:39.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
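To summarise the ip/iptables trace above: the NIC pair is split so that the target-side port cvl_0_0 (10.0.0.2/24) is moved into the cvl_0_0_ns_spdk network namespace while the initiator-side port cvl_0_1 (10.0.0.1/24) stays in the root namespace, with TCP port 4420 opened for NVMe/TCP; the two pings then check reachability in both directions (the 10.0.0.1 reply continues in the log just below). A condensed sketch of that setup, with every command copied from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP/4420 (NVMe/TCP) on the test interface
    ping -c 1 10.0.0.2                                             # root namespace -> target address
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> initiator address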
00:19:39.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.062 ms 00:19:39.055 00:19:39.055 --- 10.0.0.1 ping statistics --- 00:19:39.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:39.055 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@721 -- # xtrace_disable 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2017872 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2017872 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@828 -- # '[' -z 2017872 ']' 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:39.055 00:35:04 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:39.055 [2024-05-15 00:35:04.456013] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:19:39.055 [2024-05-15 00:35:04.456123] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:39.055 EAL: No free 2048 kB hugepages reported on node 1 00:19:39.055 [2024-05-15 00:35:04.580323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:39.055 [2024-05-15 00:35:04.680291] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:39.055 [2024-05-15 00:35:04.680332] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:39.055 [2024-05-15 00:35:04.680348] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:39.055 [2024-05-15 00:35:04.680358] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:39.055 [2024-05-15 00:35:04.680367] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:39.055 [2024-05-15 00:35:04.680589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:39.055 [2024-05-15 00:35:04.680706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:39.055 [2024-05-15 00:35:04.680807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:39.055 [2024-05-15 00:35:04.680835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:39.055 00:35:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:39.055 00:35:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@861 -- # return 0 00:19:39.055 00:35:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:39.055 00:35:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@727 -- # xtrace_disable 00:19:39.055 00:35:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:39.055 00:35:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.055 00:35:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:39.055 00:35:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.055 00:35:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:39.055 [2024-05-15 00:35:05.213093] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.315 00:35:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.315 00:35:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:39.315 00:35:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.315 00:35:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:39.315 Malloc0 00:19:39.315 00:35:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.315 00:35:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:39.315 00:35:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.315 00:35:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:39.315 00:35:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.315 00:35:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:39.315 00:35:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.315 00:35:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:39.315 00:35:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.315 00:35:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:39.315 00:35:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:39.315 00:35:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
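For orientation, the rpc_cmd trace above boils down to the following target-side bring-up. This is a sketch only: commands, arguments and paths are copied verbatim from the xtrace, while the harness itself issues them through rpc_cmd against the nvmf_tgt started earlier inside cvl_0_0_ns_spdk. The tcp.c listener notice on the next line confirms the 10.0.0.2:4420 NVMe/TCP listener came up.

    RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192                                    # transport options copied as-is from the trace
    $RPC bdev_malloc_create 64 512 -b Malloc0                                       # 64 MiB RAM bdev, 512-byte blocks (MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE)
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # allow any host, fixed serial
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # expose Malloc0 as a namespace
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420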
00:19:39.315 [2024-05-15 00:35:05.276472] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:19:39.315 [2024-05-15 00:35:05.276853] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.315 00:35:05 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:39.315 00:35:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:39.315 00:35:05 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:39.315 00:35:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:19:39.315 00:35:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:19:39.315 00:35:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:39.315 00:35:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:39.315 { 00:19:39.316 "params": { 00:19:39.316 "name": "Nvme$subsystem", 00:19:39.316 "trtype": "$TEST_TRANSPORT", 00:19:39.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:39.316 "adrfam": "ipv4", 00:19:39.316 "trsvcid": "$NVMF_PORT", 00:19:39.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:39.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:39.316 "hdgst": ${hdgst:-false}, 00:19:39.316 "ddgst": ${ddgst:-false} 00:19:39.316 }, 00:19:39.316 "method": "bdev_nvme_attach_controller" 00:19:39.316 } 00:19:39.316 EOF 00:19:39.316 )") 00:19:39.316 00:35:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:19:39.316 00:35:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:19:39.316 00:35:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:19:39.316 00:35:05 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:39.316 "params": { 00:19:39.316 "name": "Nvme1", 00:19:39.316 "trtype": "tcp", 00:19:39.316 "traddr": "10.0.0.2", 00:19:39.316 "adrfam": "ipv4", 00:19:39.316 "trsvcid": "4420", 00:19:39.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:39.316 "hdgst": false, 00:19:39.316 "ddgst": false 00:19:39.316 }, 00:19:39.316 "method": "bdev_nvme_attach_controller" 00:19:39.316 }' 00:19:39.316 [2024-05-15 00:35:05.352372] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
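The JSON fragment rendered above is what gen_nvmf_target_json hands to bdevio on fd 62: a single bdev_nvme_attach_controller entry that gives the bdevio process its initiator-side Nvme1 controller, which shows up as the Nvme1n1 I/O target in the tests below. Purely as an illustration, the same parameters correspond to the manual rpc.py call sketched here; the flag spellings are an assumption based on common rpc.py usage, since this log only shows the JSON form of the call.

    # Assumed rpc.py flag spellings; parameter values are taken from the JSON above.
    /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller \
        -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # hdgst/ddgst are left at their default of false, matching the rendered JSON.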
00:19:39.316 [2024-05-15 00:35:05.352477] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2017946 ] 00:19:39.316 EAL: No free 2048 kB hugepages reported on node 1 00:19:39.316 [2024-05-15 00:35:05.467627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:39.575 [2024-05-15 00:35:05.563971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.575 [2024-05-15 00:35:05.563979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:39.575 [2024-05-15 00:35:05.563979] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.833 I/O targets: 00:19:39.833 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:39.833 00:19:39.833 00:19:39.833 CUnit - A unit testing framework for C - Version 2.1-3 00:19:39.833 http://cunit.sourceforge.net/ 00:19:39.833 00:19:39.833 00:19:39.833 Suite: bdevio tests on: Nvme1n1 00:19:39.833 Test: blockdev write read block ...passed 00:19:40.093 Test: blockdev write zeroes read block ...passed 00:19:40.093 Test: blockdev write zeroes read no split ...passed 00:19:40.093 Test: blockdev write zeroes read split ...passed 00:19:40.093 Test: blockdev write zeroes read split partial ...passed 00:19:40.093 Test: blockdev reset ...[2024-05-15 00:35:06.035332] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:40.093 [2024-05-15 00:35:06.035419] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a1b80 (9): Bad file descriptor 00:19:40.093 [2024-05-15 00:35:06.087527] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:40.093 passed 00:19:40.093 Test: blockdev write read 8 blocks ...passed 00:19:40.093 Test: blockdev write read size > 128k ...passed 00:19:40.093 Test: blockdev write read invalid size ...passed 00:19:40.093 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:40.093 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:40.093 Test: blockdev write read max offset ...passed 00:19:40.354 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:40.354 Test: blockdev writev readv 8 blocks ...passed 00:19:40.354 Test: blockdev writev readv 30 x 1block ...passed 00:19:40.354 Test: blockdev writev readv block ...passed 00:19:40.354 Test: blockdev writev readv size > 128k ...passed 00:19:40.354 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:40.354 Test: blockdev comparev and writev ...[2024-05-15 00:35:06.387002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:40.354 [2024-05-15 00:35:06.387041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:40.354 [2024-05-15 00:35:06.387059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:40.354 [2024-05-15 00:35:06.387068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:40.354 [2024-05-15 00:35:06.387333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:40.354 [2024-05-15 00:35:06.387345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:40.354 [2024-05-15 00:35:06.387360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:40.354 [2024-05-15 00:35:06.387369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:40.354 [2024-05-15 00:35:06.387610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:40.354 [2024-05-15 00:35:06.387620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:40.354 [2024-05-15 00:35:06.387634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:40.354 [2024-05-15 00:35:06.387646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:40.354 [2024-05-15 00:35:06.387914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:40.354 [2024-05-15 00:35:06.387925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:40.354 [2024-05-15 00:35:06.387940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:40.354 [2024-05-15 00:35:06.387948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:40.354 passed 00:19:40.354 Test: blockdev nvme passthru rw ...passed 00:19:40.354 Test: blockdev nvme passthru vendor specific ...[2024-05-15 00:35:06.471910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:40.354 [2024-05-15 00:35:06.471941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:40.354 [2024-05-15 00:35:06.472049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:40.354 [2024-05-15 00:35:06.472057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:40.354 [2024-05-15 00:35:06.472153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:40.354 [2024-05-15 00:35:06.472162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:40.354 [2024-05-15 00:35:06.472269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:40.354 [2024-05-15 00:35:06.472277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:40.354 passed 00:19:40.354 Test: blockdev nvme admin passthru ...passed 00:19:40.614 Test: blockdev copy ...passed 00:19:40.614 00:19:40.614 Run Summary: Type Total Ran Passed Failed Inactive 00:19:40.614 suites 1 1 n/a 0 0 00:19:40.614 tests 23 23 23 0 0 00:19:40.614 asserts 152 152 152 0 n/a 00:19:40.614 00:19:40.614 Elapsed time = 1.254 seconds 00:19:40.873 00:35:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:40.873 00:35:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:40.873 00:35:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:40.873 00:35:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:40.873 00:35:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:40.873 00:35:06 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:40.873 00:35:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:40.873 00:35:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:40.873 00:35:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:40.873 00:35:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:40.873 00:35:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:40.873 00:35:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:40.873 rmmod nvme_tcp 00:19:40.873 rmmod nvme_fabrics 00:19:40.873 rmmod nvme_keyring 00:19:40.873 00:35:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:40.873 00:35:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:40.873 00:35:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:40.873 00:35:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2017872 ']' 00:19:40.873 00:35:06 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2017872 00:19:40.873 00:35:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@947 -- # '[' -z 
2017872 ']' 00:19:40.873 00:35:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@951 -- # kill -0 2017872 00:19:40.873 00:35:06 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # uname 00:19:40.873 00:35:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:19:40.873 00:35:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2017872 00:19:41.131 00:35:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # process_name=reactor_3 00:19:41.131 00:35:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' reactor_3 = sudo ']' 00:19:41.132 00:35:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2017872' 00:19:41.132 killing process with pid 2017872 00:19:41.132 00:35:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # kill 2017872 00:19:41.132 [2024-05-15 00:35:07.044993] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:19:41.132 00:35:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@971 -- # wait 2017872 00:19:41.390 00:35:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:41.390 00:35:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:41.390 00:35:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:41.390 00:35:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:41.390 00:35:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:41.390 00:35:07 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.390 00:35:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.648 00:35:07 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.558 00:35:09 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:43.558 00:19:43.558 real 0m11.278s 00:19:43.558 user 0m15.703s 00:19:43.558 sys 0m5.148s 00:19:43.558 00:35:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # xtrace_disable 00:19:43.558 00:35:09 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:43.558 ************************************ 00:19:43.558 END TEST nvmf_bdevio 00:19:43.558 ************************************ 00:19:43.558 00:35:09 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:43.558 00:35:09 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:19:43.558 00:35:09 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:19:43.558 00:35:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:43.558 ************************************ 00:19:43.558 START TEST nvmf_auth_target 00:19:43.558 ************************************ 00:19:43.558 00:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:43.819 * Looking for test storage... 
00:19:43.819 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@57 -- # nvmftestinit 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # 
'[' -z tcp ']' 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:43.819 00:35:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.417 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:50.417 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:50.417 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:50.417 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:50.417 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:50.417 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:50.417 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:50.417 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:50.417 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:50.417 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:50.417 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:50.417 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:50.417 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:50.417 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:50.417 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:50.417 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:50.417 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:50.417 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:50.417 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:50.417 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:19:50.418 Found 0000:27:00.0 (0x8086 - 0x159b) 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:19:50.418 Found 0000:27:00.1 (0x8086 - 0x159b) 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:19:50.418 Found net devices under 0000:27:00.0: cvl_0_0 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:19:50.418 Found net devices under 0000:27:00.1: cvl_0_1 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:50.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:50.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms 00:19:50.418 00:19:50.418 --- 10.0.0.2 ping statistics --- 00:19:50.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.418 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:50.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:50.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:19:50.418 00:19:50.418 --- 10.0.0.1 ping statistics --- 00:19:50.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.418 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@58 -- # nvmfappstart -L nvmf_auth 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@721 -- # xtrace_disable 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2022416 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2022416 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 2022416 ']' 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
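For orientation, the namespace plumbing that nvmf_tcp_init traces above reduces to a short sequence. The sketch below is a hedged summary using the interface names and addressing seen in this run (cvl_0_0 / cvl_0_1, 10.0.0.0/24); on other hardware the names and subnet will differ, and the "&"-free ordering is only illustrative.

# Hedged sketch of the netns loopback topology built by nvmf_tcp_init in this run
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"            # target-side NIC moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator address stays on the host
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic reach the listener
ping -c 1 10.0.0.2                                            # host -> namespace reachability check
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1     # namespace -> host

With both pings answering (0.407 ms and 0.286 ms above), the target application can be started inside the namespace.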
00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:50.418 00:35:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.418 00:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:50.418 00:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:19:50.418 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:50.418 00:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:19:50.418 00:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.418 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.418 00:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # hostpid=2022564 00:19:50.418 00:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:50.418 00:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:50.418 00:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # gen_dhchap_key null 48 00:19:50.418 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:50.418 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:50.418 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:50.418 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:50.418 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:50.418 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:50.418 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=44c4c5308d5f0d98f2402d8502b4daa6560eabf4feaa8079 00:19:50.418 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:50.419 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.7Td 00:19:50.419 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 44c4c5308d5f0d98f2402d8502b4daa6560eabf4feaa8079 0 00:19:50.419 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 44c4c5308d5f0d98f2402d8502b4daa6560eabf4feaa8079 0 00:19:50.419 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:50.419 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:50.419 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=44c4c5308d5f0d98f2402d8502b4daa6560eabf4feaa8079 00:19:50.419 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:50.419 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.7Td 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.7Td 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # keys[0]=/tmp/spdk.key-null.7Td 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # gen_dhchap_key sha256 32 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 
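It is easy to lose in the interleaved trace that two SPDK applications run side by side from this point on: the NVMe-oF target lives inside the namespace and answers RPCs on its default socket, while the spdk_tgt launched just above plays the host/initiator role on /var/tmp/host.sock. A hedged sketch of that split, with the binary paths shortened relative to the workspace root and backgrounding implied rather than shown:

# Target side: nvmf_tgt inside the namespace, RPC on the default /var/tmp/spdk.sock
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &

# Host/initiator side: a second SPDK app with its own RPC socket and nvme_auth tracing
./build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth &

# In the test helpers (assumed mapping, consistent with the trace):
#   rpc_cmd  -> scripts/rpc.py                        (target socket)
#   hostrpc  -> scripts/rpc.py -s /var/tmp/host.sock  (host socket)

Keeping the two RPC sockets apart is what lets the same script configure the subsystem on the target while driving the authenticating controller from the host side.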
00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a64f012a1c107792012fac064ae88fe8 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.9KM 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a64f012a1c107792012fac064ae88fe8 1 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a64f012a1c107792012fac064ae88fe8 1 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a64f012a1c107792012fac064ae88fe8 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.9KM 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.9KM 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@65 -- # keys[1]=/tmp/spdk.key-sha256.9KM 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # gen_dhchap_key sha384 48 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=44f7d6e0d7d9fd974c8cf35c2e00c46306a9ae3effe00b9c 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.KbU 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 44f7d6e0d7d9fd974c8cf35c2e00c46306a9ae3effe00b9c 2 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 44f7d6e0d7d9fd974c8cf35c2e00c46306a9ae3effe00b9c 2 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=44f7d6e0d7d9fd974c8cf35c2e00c46306a9ae3effe00b9c 00:19:50.678 00:35:16 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.KbU 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.KbU 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@66 -- # keys[2]=/tmp/spdk.key-sha384.KbU 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0025d364c8d13c58fd2e07d0233d3c4f924e045be159ef9287638c9cf279d04c 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.yi4 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0025d364c8d13c58fd2e07d0233d3c4f924e045be159ef9287638c9cf279d04c 3 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0025d364c8d13c58fd2e07d0233d3c4f924e045be159ef9287638c9cf279d04c 3 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0025d364c8d13c58fd2e07d0233d3c4f924e045be159ef9287638c9cf279d04c 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.yi4 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.yi4 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[3]=/tmp/spdk.key-sha512.yi4 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # waitforlisten 2022416 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 2022416 ']' 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
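The four gen_dhchap_key calls above all follow one recipe: draw random bytes, keep their hex form as the ASCII secret, and wrap it in a DHHC-1 container string. The sketch below is a hedged reconstruction of that recipe; it assumes the container is base64 of the secret with its CRC-32 appended (which matches the 72-character strings printed above for 48-byte secrets) and that the hash identifier is 00/01/02/03 for none/SHA-256/SHA-384/SHA-512. The CRC byte order and the inline Python are assumptions, so treat this as illustrative rather than as the reference helper.

# Hedged sketch of gen_dhchap_key, e.g. "sha256 32" -> DHHC-1:01:<base64>: stored under /tmp
digest_id=1                                # 0=null, 1=sha256, 2=sha384, 3=sha512
key=$(xxd -p -c0 -l 16 /dev/urandom)       # 32 hex characters used as the ASCII secret
secret=$(python3 -c '
import base64, struct, sys, zlib
k = sys.argv[1].encode()
crc = struct.pack("<I", zlib.crc32(k))     # CRC-32 suffix; little-endian assumed here
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + crc).decode()))
' "$key" "$digest_id")
file=$(mktemp -t spdk.key-sha256.XXX)      # same /tmp/spdk.key-<digest>.XXX naming as the trace
echo "$secret" > "$file" && chmod 0600 "$file"
echo "$file"                               # the test records this path in keys[...]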
00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:50.678 00:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.938 00:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:50.938 00:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:19:50.938 00:35:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # waitforlisten 2022564 /var/tmp/host.sock 00:19:50.938 00:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@828 -- # '[' -z 2022564 ']' 00:19:50.938 00:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/host.sock 00:19:50.938 00:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:19:50.938 00:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:50.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:50.938 00:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:19:50.938 00:35:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.509 00:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:19:51.509 00:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@861 -- # return 0 00:19:51.509 00:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@71 -- # rpc_cmd 00:19:51.509 00:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.509 00:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.509 00:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.509 00:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:19:51.509 00:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.7Td 00:19:51.509 00:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.509 00:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.509 00:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.509 00:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.7Td 00:19:51.509 00:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.7Td 00:19:51.769 00:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:19:51.769 00:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.9KM 00:19:51.769 00:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.769 00:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.769 00:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.769 00:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.9KM 00:19:51.769 00:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
keyring_file_add_key key1 /tmp/spdk.key-sha256.9KM 00:19:51.769 00:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:19:51.769 00:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.KbU 00:19:51.769 00:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:51.769 00:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.769 00:35:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:51.769 00:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.KbU 00:19:51.769 00:35:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.KbU 00:19:52.029 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@78 -- # for i in "${!keys[@]}" 00:19:52.029 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@79 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.yi4 00:19:52.029 00:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.029 00:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.029 00:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.029 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@80 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.yi4 00:19:52.029 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.yi4 00:19:52.029 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:19:52.029 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.029 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:52.029 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:52.029 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:52.290 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 0 00:19:52.290 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:52.290 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:52.290 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:52.290 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:52.290 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key0 00:19:52.290 00:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.290 00:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.290 00:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.290 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:52.290 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:52.549 00:19:52.549 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:52.549 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:52.549 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.549 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.549 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.549 00:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:52.549 00:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.549 00:35:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:52.549 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:52.549 { 00:19:52.549 "cntlid": 1, 00:19:52.549 "qid": 0, 00:19:52.549 "state": "enabled", 00:19:52.549 "listen_address": { 00:19:52.549 "trtype": "TCP", 00:19:52.549 "adrfam": "IPv4", 00:19:52.549 "traddr": "10.0.0.2", 00:19:52.549 "trsvcid": "4420" 00:19:52.549 }, 00:19:52.549 "peer_address": { 00:19:52.549 "trtype": "TCP", 00:19:52.549 "adrfam": "IPv4", 00:19:52.549 "traddr": "10.0.0.1", 00:19:52.549 "trsvcid": "37086" 00:19:52.549 }, 00:19:52.549 "auth": { 00:19:52.549 "state": "completed", 00:19:52.549 "digest": "sha256", 00:19:52.549 "dhgroup": "null" 00:19:52.549 } 00:19:52.549 } 00:19:52.549 ]' 00:19:52.549 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:52.549 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:52.549 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:52.549 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:52.549 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:52.808 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.808 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.808 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.808 00:35:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:00:NDRjNGM1MzA4ZDVmMGQ5OGYyNDAyZDg1MDJiNGRhYTY1NjBlYWJmNGZlYWE4MDc52HBcqg==: 00:19:53.376 00:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:19:53.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.376 00:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:19:53.376 00:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.376 00:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.376 00:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.376 00:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:53.376 00:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:53.376 00:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:53.637 00:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 1 00:19:53.637 00:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:53.637 00:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:53.637 00:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:53.637 00:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:53.637 00:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key1 00:19:53.637 00:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.637 00:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.637 00:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.637 00:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:53.637 00:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:53.897 00:19:53.897 00:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:53.897 00:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:53.897 00:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.897 00:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.897 00:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.897 00:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:53.897 00:35:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.897 00:35:19 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:53.897 00:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:53.897 { 00:19:53.897 "cntlid": 3, 00:19:53.897 "qid": 0, 00:19:53.897 "state": "enabled", 00:19:53.897 "listen_address": { 00:19:53.897 "trtype": "TCP", 00:19:53.897 "adrfam": "IPv4", 00:19:53.897 "traddr": "10.0.0.2", 00:19:53.897 "trsvcid": "4420" 00:19:53.897 }, 00:19:53.897 "peer_address": { 00:19:53.897 "trtype": "TCP", 00:19:53.897 "adrfam": "IPv4", 00:19:53.897 "traddr": "10.0.0.1", 00:19:53.897 "trsvcid": "37128" 00:19:53.897 }, 00:19:53.897 "auth": { 00:19:53.897 "state": "completed", 00:19:53.897 "digest": "sha256", 00:19:53.897 "dhgroup": "null" 00:19:53.897 } 00:19:53.897 } 00:19:53.897 ]' 00:19:53.897 00:35:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:53.897 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:53.897 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:54.156 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:54.156 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:54.156 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.156 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.156 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.156 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:01:YTY0ZjAxMmExYzEwNzc5MjAxMmZhYzA2NGFlODhmZTjO55Yc: 00:19:54.721 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.721 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:19:54.721 00:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:54.721 00:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.721 00:35:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:54.721 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:54.721 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:54.721 00:35:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:54.981 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 2 00:19:54.981 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:54.981 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:54.981 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 
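The sha256/null/key1 round that just finished is representative of every iteration that follows; only the key index and, later, the dhgroup change. A condensed, hedged sketch of one round may help when reading the rest of the trace: the rpc.py path is shortened to scripts/rpc.py, only the jq fields the test asserts on are shown, and the NQNs, address and key1 secret are copied verbatim from the run above.

SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda

# Host side: limit what may be negotiated for DH-HMAC-CHAP
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# Target side: allow this host on the subsystem, bound to key1
scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key1

# Host side: attach, which performs the authenticated fabric connect
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key key1

# Check what the admin queue pair actually negotiated (expect sha256 / null / completed)
scripts/rpc.py nvmf_subsystem_get_qpairs "$SUBNQN" | \
    jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

# Detach the SPDK host, then exercise the kernel initiator with the same secret
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
    --hostid 80ef6226-405e-ee11-906e-a4bf01973fda \
    --dhchap-secret DHHC-1:01:YTY0ZjAxMmExYzEwNzc5MjAxMmZhYzA2NGFlODhmZTjO55Yc:
nvme disconnect -n "$SUBNQN"
scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"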
00:19:54.981 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:54.981 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key2 00:19:54.981 00:35:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:54.981 00:35:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.981 00:35:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:54.981 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:54.981 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:55.242 00:19:55.242 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:55.242 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:55.242 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.242 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.242 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.242 00:35:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:55.242 00:35:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.242 00:35:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:55.242 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:55.242 { 00:19:55.242 "cntlid": 5, 00:19:55.242 "qid": 0, 00:19:55.242 "state": "enabled", 00:19:55.242 "listen_address": { 00:19:55.242 "trtype": "TCP", 00:19:55.242 "adrfam": "IPv4", 00:19:55.242 "traddr": "10.0.0.2", 00:19:55.242 "trsvcid": "4420" 00:19:55.242 }, 00:19:55.242 "peer_address": { 00:19:55.242 "trtype": "TCP", 00:19:55.242 "adrfam": "IPv4", 00:19:55.242 "traddr": "10.0.0.1", 00:19:55.242 "trsvcid": "37170" 00:19:55.242 }, 00:19:55.242 "auth": { 00:19:55.242 "state": "completed", 00:19:55.242 "digest": "sha256", 00:19:55.242 "dhgroup": "null" 00:19:55.242 } 00:19:55.242 } 00:19:55.242 ]' 00:19:55.242 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:55.503 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:55.503 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:55.503 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:55.503 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:55.503 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.503 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:19:55.503 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.503 00:35:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:02:NDRmN2Q2ZTBkN2Q5ZmQ5NzRjOGNmMzVjMmUwMGM0NjMwNmE5YWUzZWZmZTAwYjljY+xEjw==: 00:19:56.072 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.329 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:19:56.329 00:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:56.329 00:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.329 00:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:56.329 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:56.329 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:56.329 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:56.329 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 null 3 00:19:56.329 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:56.329 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:56.329 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:56.329 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:56.329 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key3 00:19:56.329 00:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:56.329 00:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.329 00:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:56.329 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:56.329 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:56.586 00:19:56.586 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:56.586 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.586 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:56.846 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.846 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.846 00:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:56.846 00:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.846 00:35:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:56.846 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:56.846 { 00:19:56.846 "cntlid": 7, 00:19:56.846 "qid": 0, 00:19:56.846 "state": "enabled", 00:19:56.846 "listen_address": { 00:19:56.846 "trtype": "TCP", 00:19:56.846 "adrfam": "IPv4", 00:19:56.846 "traddr": "10.0.0.2", 00:19:56.846 "trsvcid": "4420" 00:19:56.846 }, 00:19:56.846 "peer_address": { 00:19:56.846 "trtype": "TCP", 00:19:56.846 "adrfam": "IPv4", 00:19:56.846 "traddr": "10.0.0.1", 00:19:56.846 "trsvcid": "37202" 00:19:56.846 }, 00:19:56.846 "auth": { 00:19:56.846 "state": "completed", 00:19:56.846 "digest": "sha256", 00:19:56.846 "dhgroup": "null" 00:19:56.846 } 00:19:56.846 } 00:19:56.846 ]' 00:19:56.846 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:56.846 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.846 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:56.846 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:19:56.846 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:56.846 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.846 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.846 00:35:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.106 00:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:03:MDAyNWQzNjRjOGQxM2M1OGZkMmUwN2QwMjMzZDNjNGY5MjRlMDQ1YmUxNTllZjkyODc2MzhjOWNmMjc5ZDA0Y6xyGPs=: 00:19:57.675 00:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.675 00:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:19:57.675 00:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:57.675 00:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.675 00:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:57.675 00:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.675 00:35:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:57.675 00:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:57.675 00:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:57.675 00:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 0 00:19:57.675 00:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:57.675 00:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:57.675 00:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:57.675 00:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:57.675 00:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key0 00:19:57.675 00:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:57.675 00:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.675 00:35:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:57.676 00:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:57.676 00:35:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:19:57.934 00:19:57.934 00:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:57.934 00:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:57.934 00:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.191 00:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.191 00:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.191 00:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:58.191 00:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.191 00:35:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:58.191 00:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:58.191 { 00:19:58.191 "cntlid": 9, 00:19:58.191 "qid": 0, 00:19:58.191 "state": "enabled", 00:19:58.191 "listen_address": { 00:19:58.191 "trtype": "TCP", 00:19:58.191 "adrfam": "IPv4", 00:19:58.191 "traddr": "10.0.0.2", 00:19:58.191 "trsvcid": "4420" 00:19:58.191 }, 00:19:58.191 "peer_address": { 00:19:58.191 "trtype": "TCP", 00:19:58.191 "adrfam": "IPv4", 00:19:58.191 "traddr": "10.0.0.1", 00:19:58.191 "trsvcid": "37234" 00:19:58.191 }, 00:19:58.191 
"auth": { 00:19:58.191 "state": "completed", 00:19:58.191 "digest": "sha256", 00:19:58.191 "dhgroup": "ffdhe2048" 00:19:58.191 } 00:19:58.191 } 00:19:58.191 ]' 00:19:58.191 00:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:58.191 00:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.191 00:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:58.191 00:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:58.191 00:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:58.191 00:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.191 00:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.191 00:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.449 00:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:00:NDRjNGM1MzA4ZDVmMGQ5OGYyNDAyZDg1MDJiNGRhYTY1NjBlYWJmNGZlYWE4MDc52HBcqg==: 00:19:59.018 00:35:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.018 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:19:59.018 00:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.018 00:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.018 00:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.018 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:19:59.018 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:59.018 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:59.278 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 1 00:19:59.278 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:19:59.278 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:59.278 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:59.278 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:59.278 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key1 00:19:59.278 00:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.278 00:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.278 00:35:25 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.279 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:59.279 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:19:59.279 00:19:59.279 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:19:59.279 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.279 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:19:59.537 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.537 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.537 00:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:19:59.537 00:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.537 00:35:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:19:59.537 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:19:59.537 { 00:19:59.537 "cntlid": 11, 00:19:59.537 "qid": 0, 00:19:59.537 "state": "enabled", 00:19:59.537 "listen_address": { 00:19:59.537 "trtype": "TCP", 00:19:59.537 "adrfam": "IPv4", 00:19:59.537 "traddr": "10.0.0.2", 00:19:59.537 "trsvcid": "4420" 00:19:59.537 }, 00:19:59.537 "peer_address": { 00:19:59.537 "trtype": "TCP", 00:19:59.537 "adrfam": "IPv4", 00:19:59.537 "traddr": "10.0.0.1", 00:19:59.537 "trsvcid": "37256" 00:19:59.537 }, 00:19:59.537 "auth": { 00:19:59.537 "state": "completed", 00:19:59.537 "digest": "sha256", 00:19:59.537 "dhgroup": "ffdhe2048" 00:19:59.537 } 00:19:59.537 } 00:19:59.537 ]' 00:19:59.537 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:19:59.537 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.537 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:19:59.537 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:59.537 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:19:59.537 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.537 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.537 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.795 00:35:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret 
DHHC-1:01:YTY0ZjAxMmExYzEwNzc5MjAxMmZhYzA2NGFlODhmZTjO55Yc: 00:20:00.361 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.361 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:00.361 00:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:00.361 00:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.361 00:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:00.361 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:00.361 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:00.361 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:00.621 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 2 00:20:00.621 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:00.621 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:00.621 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:00.621 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:00.621 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key2 00:20:00.621 00:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:00.621 00:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.621 00:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:00.621 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:00.621 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:00.621 00:20:00.882 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:00.882 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.882 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:00.882 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.882 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.882 00:35:26 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:20:00.882 00:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.882 00:35:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:00.882 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:00.882 { 00:20:00.882 "cntlid": 13, 00:20:00.882 "qid": 0, 00:20:00.882 "state": "enabled", 00:20:00.882 "listen_address": { 00:20:00.882 "trtype": "TCP", 00:20:00.882 "adrfam": "IPv4", 00:20:00.882 "traddr": "10.0.0.2", 00:20:00.882 "trsvcid": "4420" 00:20:00.882 }, 00:20:00.882 "peer_address": { 00:20:00.882 "trtype": "TCP", 00:20:00.882 "adrfam": "IPv4", 00:20:00.882 "traddr": "10.0.0.1", 00:20:00.882 "trsvcid": "37288" 00:20:00.882 }, 00:20:00.882 "auth": { 00:20:00.882 "state": "completed", 00:20:00.882 "digest": "sha256", 00:20:00.882 "dhgroup": "ffdhe2048" 00:20:00.882 } 00:20:00.882 } 00:20:00.882 ]' 00:20:00.882 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:00.882 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.882 00:35:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:00.882 00:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:00.882 00:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:01.142 00:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.142 00:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.142 00:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.142 00:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:02:NDRmN2Q2ZTBkN2Q5ZmQ5NzRjOGNmMzVjMmUwMGM0NjMwNmE5YWUzZWZmZTAwYjljY+xEjw==: 00:20:02.081 00:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.081 00:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:02.081 00:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:02.081 00:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.081 00:35:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:02.081 00:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:02.081 00:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:02.081 00:35:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:02.081 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe2048 3 00:20:02.081 00:35:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:02.081 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:02.081 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:02.081 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:02.081 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key3 00:20:02.081 00:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:02.081 00:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.081 00:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:02.081 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.081 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.340 00:20:02.340 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:02.340 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.340 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:02.340 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.340 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.340 00:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:02.340 00:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.340 00:35:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:02.340 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:02.340 { 00:20:02.340 "cntlid": 15, 00:20:02.340 "qid": 0, 00:20:02.340 "state": "enabled", 00:20:02.340 "listen_address": { 00:20:02.340 "trtype": "TCP", 00:20:02.340 "adrfam": "IPv4", 00:20:02.340 "traddr": "10.0.0.2", 00:20:02.340 "trsvcid": "4420" 00:20:02.340 }, 00:20:02.340 "peer_address": { 00:20:02.340 "trtype": "TCP", 00:20:02.340 "adrfam": "IPv4", 00:20:02.340 "traddr": "10.0.0.1", 00:20:02.340 "trsvcid": "43460" 00:20:02.340 }, 00:20:02.340 "auth": { 00:20:02.340 "state": "completed", 00:20:02.340 "digest": "sha256", 00:20:02.340 "dhgroup": "ffdhe2048" 00:20:02.340 } 00:20:02.340 } 00:20:02.340 ]' 00:20:02.340 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:02.340 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.340 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:02.340 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:02.340 00:35:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:02.598 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.599 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.599 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.599 00:35:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:03:MDAyNWQzNjRjOGQxM2M1OGZkMmUwN2QwMjMzZDNjNGY5MjRlMDQ1YmUxNTllZjkyODc2MzhjOWNmMjc5ZDA0Y6xyGPs=: 00:20:03.164 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.164 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.164 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:03.164 00:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:03.164 00:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.164 00:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:03.164 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:03.164 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:03.164 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:03.164 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:03.423 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 0 00:20:03.423 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:03.423 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:03.423 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:03.423 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:03.423 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key0 00:20:03.423 00:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:03.423 00:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.423 00:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:03.423 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:03.423 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:03.681 00:20:03.681 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:03.681 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.681 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:03.681 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.681 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.681 00:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:03.681 00:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.681 00:35:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:03.681 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:03.681 { 00:20:03.681 "cntlid": 17, 00:20:03.681 "qid": 0, 00:20:03.681 "state": "enabled", 00:20:03.681 "listen_address": { 00:20:03.681 "trtype": "TCP", 00:20:03.681 "adrfam": "IPv4", 00:20:03.681 "traddr": "10.0.0.2", 00:20:03.681 "trsvcid": "4420" 00:20:03.681 }, 00:20:03.681 "peer_address": { 00:20:03.681 "trtype": "TCP", 00:20:03.681 "adrfam": "IPv4", 00:20:03.681 "traddr": "10.0.0.1", 00:20:03.681 "trsvcid": "43494" 00:20:03.681 }, 00:20:03.681 "auth": { 00:20:03.681 "state": "completed", 00:20:03.681 "digest": "sha256", 00:20:03.681 "dhgroup": "ffdhe3072" 00:20:03.681 } 00:20:03.681 } 00:20:03.681 ]' 00:20:03.681 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:03.939 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.939 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:03.939 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:03.939 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:03.939 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.939 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.939 00:35:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.939 00:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:00:NDRjNGM1MzA4ZDVmMGQ5OGYyNDAyZDg1MDJiNGRhYTY1NjBlYWJmNGZlYWE4MDc52HBcqg==: 00:20:04.505 00:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.505 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.505 00:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:04.505 00:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:04.505 00:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.764 00:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:04.764 00:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:04.764 00:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:04.764 00:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:04.764 00:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 1 00:20:04.764 00:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:04.764 00:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:04.764 00:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:04.764 00:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:04.764 00:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key1 00:20:04.764 00:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:04.764 00:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.764 00:35:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:04.764 00:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:04.764 00:35:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:05.024 00:20:05.024 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:05.024 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:05.024 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.024 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.024 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.024 00:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:05.024 00:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.024 00:35:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:05.024 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:05.024 { 00:20:05.024 "cntlid": 19, 00:20:05.024 "qid": 0, 00:20:05.024 
"state": "enabled", 00:20:05.024 "listen_address": { 00:20:05.024 "trtype": "TCP", 00:20:05.024 "adrfam": "IPv4", 00:20:05.024 "traddr": "10.0.0.2", 00:20:05.024 "trsvcid": "4420" 00:20:05.024 }, 00:20:05.024 "peer_address": { 00:20:05.024 "trtype": "TCP", 00:20:05.024 "adrfam": "IPv4", 00:20:05.024 "traddr": "10.0.0.1", 00:20:05.024 "trsvcid": "43516" 00:20:05.024 }, 00:20:05.024 "auth": { 00:20:05.024 "state": "completed", 00:20:05.024 "digest": "sha256", 00:20:05.024 "dhgroup": "ffdhe3072" 00:20:05.024 } 00:20:05.024 } 00:20:05.024 ]' 00:20:05.024 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:05.284 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:05.284 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:05.284 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:05.284 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:05.284 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.284 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.284 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.543 00:35:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:01:YTY0ZjAxMmExYzEwNzc5MjAxMmZhYzA2NGFlODhmZTjO55Yc: 00:20:06.109 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.109 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:06.109 00:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.109 00:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.109 00:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.109 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:06.109 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:06.109 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:06.109 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 2 00:20:06.109 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:06.109 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:06.109 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:06.109 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:06.109 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key2 00:20:06.109 00:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.109 00:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.109 00:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.109 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:06.109 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:06.368 00:20:06.368 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:06.369 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.369 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:06.629 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.629 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.629 00:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:06.629 00:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.629 00:35:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:06.629 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:06.629 { 00:20:06.629 "cntlid": 21, 00:20:06.629 "qid": 0, 00:20:06.629 "state": "enabled", 00:20:06.629 "listen_address": { 00:20:06.629 "trtype": "TCP", 00:20:06.629 "adrfam": "IPv4", 00:20:06.629 "traddr": "10.0.0.2", 00:20:06.629 "trsvcid": "4420" 00:20:06.629 }, 00:20:06.629 "peer_address": { 00:20:06.629 "trtype": "TCP", 00:20:06.629 "adrfam": "IPv4", 00:20:06.629 "traddr": "10.0.0.1", 00:20:06.629 "trsvcid": "43546" 00:20:06.629 }, 00:20:06.629 "auth": { 00:20:06.629 "state": "completed", 00:20:06.629 "digest": "sha256", 00:20:06.629 "dhgroup": "ffdhe3072" 00:20:06.629 } 00:20:06.629 } 00:20:06.629 ]' 00:20:06.629 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:06.629 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.629 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:06.629 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:06.629 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:06.629 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.629 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.629 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.890 00:35:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:02:NDRmN2Q2ZTBkN2Q5ZmQ5NzRjOGNmMzVjMmUwMGM0NjMwNmE5YWUzZWZmZTAwYjljY+xEjw==: 00:20:07.474 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.474 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.474 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:07.474 00:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:07.474 00:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.474 00:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:07.474 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:07.475 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:07.475 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:07.475 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe3072 3 00:20:07.475 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:07.475 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:07.475 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:07.475 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:07.475 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key3 00:20:07.475 00:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:07.475 00:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.475 00:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:07.475 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:07.475 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:07.739 00:20:07.739 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:07.739 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.739 00:35:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:07.999 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.999 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.999 00:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:07.999 00:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.999 00:35:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:07.999 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:07.999 { 00:20:07.999 "cntlid": 23, 00:20:07.999 "qid": 0, 00:20:07.999 "state": "enabled", 00:20:07.999 "listen_address": { 00:20:07.999 "trtype": "TCP", 00:20:07.999 "adrfam": "IPv4", 00:20:07.999 "traddr": "10.0.0.2", 00:20:07.999 "trsvcid": "4420" 00:20:07.999 }, 00:20:07.999 "peer_address": { 00:20:07.999 "trtype": "TCP", 00:20:07.999 "adrfam": "IPv4", 00:20:07.999 "traddr": "10.0.0.1", 00:20:07.999 "trsvcid": "43578" 00:20:07.999 }, 00:20:07.999 "auth": { 00:20:07.999 "state": "completed", 00:20:07.999 "digest": "sha256", 00:20:07.999 "dhgroup": "ffdhe3072" 00:20:07.999 } 00:20:07.999 } 00:20:07.999 ]' 00:20:07.999 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:07.999 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:07.999 00:35:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:07.999 00:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:07.999 00:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:07.999 00:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.999 00:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.999 00:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.260 00:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:03:MDAyNWQzNjRjOGQxM2M1OGZkMmUwN2QwMjMzZDNjNGY5MjRlMDQ1YmUxNTllZjkyODc2MzhjOWNmMjc5ZDA0Y6xyGPs=: 00:20:08.829 00:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.829 00:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:08.829 00:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:08.829 00:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.829 00:35:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:08.829 00:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:08.829 00:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:08.829 00:35:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:08.829 00:35:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:09.088 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 0 00:20:09.088 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:09.088 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:09.088 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:09.088 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:09.088 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key0 00:20:09.088 00:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:09.088 00:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.088 00:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:09.088 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:09.088 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:09.346 00:20:09.346 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:09.346 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.346 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:09.346 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.346 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.346 00:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:09.346 00:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.346 00:35:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:09.346 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:09.346 { 00:20:09.346 "cntlid": 25, 00:20:09.346 "qid": 0, 00:20:09.346 "state": "enabled", 00:20:09.346 "listen_address": { 00:20:09.346 "trtype": "TCP", 00:20:09.346 "adrfam": "IPv4", 00:20:09.346 "traddr": "10.0.0.2", 00:20:09.346 "trsvcid": "4420" 00:20:09.346 }, 00:20:09.346 "peer_address": { 00:20:09.346 "trtype": "TCP", 00:20:09.346 "adrfam": "IPv4", 00:20:09.346 "traddr": "10.0.0.1", 00:20:09.346 "trsvcid": "43612" 00:20:09.346 }, 00:20:09.346 "auth": { 00:20:09.346 "state": "completed", 00:20:09.346 "digest": "sha256", 00:20:09.346 "dhgroup": 
"ffdhe4096" 00:20:09.346 } 00:20:09.346 } 00:20:09.346 ]' 00:20:09.346 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:09.346 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.346 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:09.604 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:09.604 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:09.604 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.604 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.604 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.604 00:35:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:00:NDRjNGM1MzA4ZDVmMGQ5OGYyNDAyZDg1MDJiNGRhYTY1NjBlYWJmNGZlYWE4MDc52HBcqg==: 00:20:10.172 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.172 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:10.172 00:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.172 00:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.172 00:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.172 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:10.172 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:10.172 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:10.431 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 1 00:20:10.431 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:10.431 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:10.431 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:10.431 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:10.431 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key1 00:20:10.431 00:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.431 00:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.431 00:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.431 00:35:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:10.431 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:10.690 00:20:10.690 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:10.690 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.690 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:10.690 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.690 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.690 00:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:10.690 00:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.690 00:35:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:10.690 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:10.690 { 00:20:10.690 "cntlid": 27, 00:20:10.690 "qid": 0, 00:20:10.690 "state": "enabled", 00:20:10.690 "listen_address": { 00:20:10.690 "trtype": "TCP", 00:20:10.690 "adrfam": "IPv4", 00:20:10.690 "traddr": "10.0.0.2", 00:20:10.690 "trsvcid": "4420" 00:20:10.690 }, 00:20:10.690 "peer_address": { 00:20:10.690 "trtype": "TCP", 00:20:10.690 "adrfam": "IPv4", 00:20:10.690 "traddr": "10.0.0.1", 00:20:10.690 "trsvcid": "43648" 00:20:10.690 }, 00:20:10.690 "auth": { 00:20:10.690 "state": "completed", 00:20:10.690 "digest": "sha256", 00:20:10.690 "dhgroup": "ffdhe4096" 00:20:10.690 } 00:20:10.690 } 00:20:10.690 ]' 00:20:10.690 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:10.948 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:10.948 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:10.948 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:10.948 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:10.948 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.948 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.948 00:35:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.948 00:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:01:YTY0ZjAxMmExYzEwNzc5MjAxMmZhYzA2NGFlODhmZTjO55Yc: 00:20:11.513 00:35:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.773 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.773 00:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:11.773 00:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:11.773 00:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.773 00:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:11.773 00:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:11.773 00:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:11.773 00:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:11.773 00:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 2 00:20:11.773 00:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:11.773 00:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:11.773 00:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:11.773 00:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:11.773 00:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key2 00:20:11.773 00:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:11.773 00:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.773 00:35:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:11.773 00:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:11.773 00:35:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:12.033 00:20:12.033 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:12.033 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.033 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:12.291 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.291 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.291 00:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:12.291 00:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:20:12.291 00:35:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:12.291 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:12.291 { 00:20:12.291 "cntlid": 29, 00:20:12.291 "qid": 0, 00:20:12.291 "state": "enabled", 00:20:12.291 "listen_address": { 00:20:12.291 "trtype": "TCP", 00:20:12.291 "adrfam": "IPv4", 00:20:12.291 "traddr": "10.0.0.2", 00:20:12.291 "trsvcid": "4420" 00:20:12.291 }, 00:20:12.291 "peer_address": { 00:20:12.291 "trtype": "TCP", 00:20:12.291 "adrfam": "IPv4", 00:20:12.291 "traddr": "10.0.0.1", 00:20:12.291 "trsvcid": "48342" 00:20:12.291 }, 00:20:12.291 "auth": { 00:20:12.291 "state": "completed", 00:20:12.291 "digest": "sha256", 00:20:12.291 "dhgroup": "ffdhe4096" 00:20:12.291 } 00:20:12.291 } 00:20:12.291 ]' 00:20:12.291 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:12.291 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:12.291 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:12.291 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:12.291 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:12.291 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.291 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.291 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.550 00:35:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:02:NDRmN2Q2ZTBkN2Q5ZmQ5NzRjOGNmMzVjMmUwMGM0NjMwNmE5YWUzZWZmZTAwYjljY+xEjw==: 00:20:13.219 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.219 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:13.219 00:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:13.219 00:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.219 00:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:13.219 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:13.219 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:13.219 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:13.219 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe4096 3 00:20:13.219 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:13.219 00:35:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha256 00:20:13.219 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:13.219 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:13.219 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key3 00:20:13.219 00:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:13.219 00:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.219 00:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:13.219 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.219 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:13.479 00:20:13.479 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:13.479 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.479 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:13.479 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.479 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.479 00:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:13.479 00:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.479 00:35:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:13.479 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:13.479 { 00:20:13.479 "cntlid": 31, 00:20:13.479 "qid": 0, 00:20:13.479 "state": "enabled", 00:20:13.479 "listen_address": { 00:20:13.479 "trtype": "TCP", 00:20:13.479 "adrfam": "IPv4", 00:20:13.479 "traddr": "10.0.0.2", 00:20:13.479 "trsvcid": "4420" 00:20:13.479 }, 00:20:13.479 "peer_address": { 00:20:13.479 "trtype": "TCP", 00:20:13.479 "adrfam": "IPv4", 00:20:13.479 "traddr": "10.0.0.1", 00:20:13.479 "trsvcid": "48360" 00:20:13.479 }, 00:20:13.479 "auth": { 00:20:13.479 "state": "completed", 00:20:13.479 "digest": "sha256", 00:20:13.479 "dhgroup": "ffdhe4096" 00:20:13.479 } 00:20:13.479 } 00:20:13.479 ]' 00:20:13.479 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:13.738 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:13.738 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:13.738 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:13.738 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:13.738 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 
-- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.738 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.738 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.738 00:35:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:03:MDAyNWQzNjRjOGQxM2M1OGZkMmUwN2QwMjMzZDNjNGY5MjRlMDQ1YmUxNTllZjkyODc2MzhjOWNmMjc5ZDA0Y6xyGPs=: 00:20:14.683 00:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.683 00:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:14.683 00:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:14.683 00:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.683 00:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:14.683 00:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:14.683 00:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:14.683 00:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:14.683 00:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:14.683 00:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 0 00:20:14.683 00:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:14.683 00:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:14.683 00:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:14.683 00:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:14.683 00:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key0 00:20:14.683 00:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:14.683 00:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.683 00:35:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:14.683 00:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:14.684 00:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:14.942 00:20:14.942 00:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:14.942 00:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:14.942 00:35:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.942 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.942 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.942 00:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:14.942 00:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.942 00:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:14.942 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:14.942 { 00:20:14.942 "cntlid": 33, 00:20:14.942 "qid": 0, 00:20:14.942 "state": "enabled", 00:20:14.942 "listen_address": { 00:20:14.942 "trtype": "TCP", 00:20:14.942 "adrfam": "IPv4", 00:20:14.942 "traddr": "10.0.0.2", 00:20:14.942 "trsvcid": "4420" 00:20:14.942 }, 00:20:14.942 "peer_address": { 00:20:14.942 "trtype": "TCP", 00:20:14.942 "adrfam": "IPv4", 00:20:14.942 "traddr": "10.0.0.1", 00:20:14.942 "trsvcid": "48384" 00:20:14.942 }, 00:20:14.942 "auth": { 00:20:14.942 "state": "completed", 00:20:14.942 "digest": "sha256", 00:20:14.942 "dhgroup": "ffdhe6144" 00:20:14.942 } 00:20:14.942 } 00:20:14.942 ]' 00:20:14.942 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:15.201 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:15.201 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:15.201 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:15.201 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:15.201 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.201 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.201 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.201 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:00:NDRjNGM1MzA4ZDVmMGQ5OGYyNDAyZDg1MDJiNGRhYTY1NjBlYWJmNGZlYWE4MDc52HBcqg==: 00:20:15.771 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.032 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:16.032 00:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:16.032 00:35:41 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.032 00:35:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:16.032 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:16.032 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:16.032 00:35:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:16.032 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 1 00:20:16.032 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:16.032 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:16.032 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:16.032 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:16.032 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key1 00:20:16.032 00:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:16.032 00:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.032 00:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:16.032 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:16.032 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:16.292 00:20:16.292 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:16.292 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.292 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:16.553 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.553 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.553 00:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:16.553 00:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.553 00:35:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:16.553 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:16.553 { 00:20:16.553 "cntlid": 35, 00:20:16.553 "qid": 0, 00:20:16.553 "state": "enabled", 00:20:16.553 "listen_address": { 00:20:16.553 "trtype": "TCP", 00:20:16.553 "adrfam": "IPv4", 00:20:16.553 "traddr": "10.0.0.2", 00:20:16.553 "trsvcid": "4420" 00:20:16.553 }, 
00:20:16.553 "peer_address": { 00:20:16.553 "trtype": "TCP", 00:20:16.553 "adrfam": "IPv4", 00:20:16.553 "traddr": "10.0.0.1", 00:20:16.553 "trsvcid": "48418" 00:20:16.553 }, 00:20:16.553 "auth": { 00:20:16.553 "state": "completed", 00:20:16.553 "digest": "sha256", 00:20:16.553 "dhgroup": "ffdhe6144" 00:20:16.553 } 00:20:16.553 } 00:20:16.553 ]' 00:20:16.553 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:16.553 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.553 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:16.553 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:16.553 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:16.553 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.553 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.553 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.813 00:35:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:01:YTY0ZjAxMmExYzEwNzc5MjAxMmZhYzA2NGFlODhmZTjO55Yc: 00:20:17.380 00:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.380 00:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:17.380 00:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:17.380 00:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.380 00:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:17.380 00:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:17.380 00:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:17.380 00:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:17.641 00:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 2 00:20:17.641 00:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:17.641 00:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:17.641 00:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:17.641 00:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:17.641 00:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key2 00:20:17.641 00:35:43 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:20:17.641 00:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.641 00:35:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:17.641 00:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:17.641 00:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:17.902 00:20:17.902 00:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:17.902 00:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.902 00:35:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:17.902 00:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.902 00:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.902 00:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:17.902 00:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.162 00:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:18.162 00:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:18.162 { 00:20:18.162 "cntlid": 37, 00:20:18.162 "qid": 0, 00:20:18.162 "state": "enabled", 00:20:18.162 "listen_address": { 00:20:18.162 "trtype": "TCP", 00:20:18.162 "adrfam": "IPv4", 00:20:18.162 "traddr": "10.0.0.2", 00:20:18.162 "trsvcid": "4420" 00:20:18.162 }, 00:20:18.162 "peer_address": { 00:20:18.162 "trtype": "TCP", 00:20:18.162 "adrfam": "IPv4", 00:20:18.162 "traddr": "10.0.0.1", 00:20:18.162 "trsvcid": "48436" 00:20:18.162 }, 00:20:18.163 "auth": { 00:20:18.163 "state": "completed", 00:20:18.163 "digest": "sha256", 00:20:18.163 "dhgroup": "ffdhe6144" 00:20:18.163 } 00:20:18.163 } 00:20:18.163 ]' 00:20:18.163 00:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:18.163 00:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.163 00:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:18.163 00:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:18.163 00:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:18.163 00:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.163 00:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.163 00:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.421 00:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:02:NDRmN2Q2ZTBkN2Q5ZmQ5NzRjOGNmMzVjMmUwMGM0NjMwNmE5YWUzZWZmZTAwYjljY+xEjw==: 00:20:18.987 00:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.987 00:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:18.987 00:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:18.987 00:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.987 00:35:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:18.987 00:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:18.987 00:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:18.987 00:35:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:18.987 00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe6144 3 00:20:18.987 00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:18.987 00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:18.987 00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:18.987 00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:18.987 00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key3 00:20:18.987 00:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:18.987 00:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.987 00:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:18.987 00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:18.987 00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:19.247 00:20:19.247 00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:19.247 00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:19.247 00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.507 00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.507 
00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.507 00:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:19.507 00:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.507 00:35:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:19.507 00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:19.507 { 00:20:19.507 "cntlid": 39, 00:20:19.507 "qid": 0, 00:20:19.507 "state": "enabled", 00:20:19.507 "listen_address": { 00:20:19.507 "trtype": "TCP", 00:20:19.507 "adrfam": "IPv4", 00:20:19.507 "traddr": "10.0.0.2", 00:20:19.507 "trsvcid": "4420" 00:20:19.507 }, 00:20:19.507 "peer_address": { 00:20:19.507 "trtype": "TCP", 00:20:19.507 "adrfam": "IPv4", 00:20:19.507 "traddr": "10.0.0.1", 00:20:19.507 "trsvcid": "48452" 00:20:19.507 }, 00:20:19.507 "auth": { 00:20:19.507 "state": "completed", 00:20:19.507 "digest": "sha256", 00:20:19.507 "dhgroup": "ffdhe6144" 00:20:19.507 } 00:20:19.507 } 00:20:19.507 ]' 00:20:19.507 00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:19.507 00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.507 00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:19.507 00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:19.507 00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:19.507 00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.507 00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.507 00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.767 00:35:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:03:MDAyNWQzNjRjOGQxM2M1OGZkMmUwN2QwMjMzZDNjNGY5MjRlMDQ1YmUxNTllZjkyODc2MzhjOWNmMjc5ZDA0Y6xyGPs=: 00:20:20.335 00:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.335 00:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:20.335 00:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:20.335 00:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.335 00:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:20.335 00:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:20.335 00:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:20.335 00:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:20.335 00:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:20.593 00:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 0 00:20:20.593 00:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:20.593 00:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:20.593 00:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:20.593 00:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:20.593 00:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key0 00:20:20.593 00:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:20.593 00:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.593 00:35:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:20.593 00:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:20.593 00:35:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:21.162 00:20:21.162 00:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:21.162 00:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.162 00:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:21.162 00:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.162 00:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.162 00:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:21.162 00:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.162 00:35:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:21.162 00:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:21.162 { 00:20:21.162 "cntlid": 41, 00:20:21.162 "qid": 0, 00:20:21.162 "state": "enabled", 00:20:21.162 "listen_address": { 00:20:21.162 "trtype": "TCP", 00:20:21.162 "adrfam": "IPv4", 00:20:21.162 "traddr": "10.0.0.2", 00:20:21.162 "trsvcid": "4420" 00:20:21.162 }, 00:20:21.162 "peer_address": { 00:20:21.162 "trtype": "TCP", 00:20:21.162 "adrfam": "IPv4", 00:20:21.162 "traddr": "10.0.0.1", 00:20:21.162 "trsvcid": "48492" 00:20:21.162 }, 00:20:21.162 "auth": { 00:20:21.162 "state": "completed", 00:20:21.162 "digest": "sha256", 00:20:21.162 "dhgroup": "ffdhe8192" 00:20:21.162 } 00:20:21.162 } 00:20:21.162 ]' 00:20:21.162 00:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:21.162 00:35:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:21.162 00:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:21.162 00:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:21.162 00:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:21.162 00:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.162 00:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.162 00:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.422 00:35:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:00:NDRjNGM1MzA4ZDVmMGQ5OGYyNDAyZDg1MDJiNGRhYTY1NjBlYWJmNGZlYWE4MDc52HBcqg==: 00:20:21.988 00:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.988 00:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:21.988 00:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:21.988 00:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.988 00:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:21.988 00:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:21.988 00:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:21.988 00:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:22.246 00:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 1 00:20:22.246 00:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:22.246 00:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:22.246 00:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:22.246 00:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:22.246 00:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key1 00:20:22.246 00:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:22.246 00:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.246 00:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:22.246 00:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:22.246 00:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:22.813 00:20:22.813 00:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:22.813 00:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.813 00:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:22.813 00:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.813 00:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.813 00:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:22.813 00:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.813 00:35:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:22.813 00:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:22.813 { 00:20:22.813 "cntlid": 43, 00:20:22.813 "qid": 0, 00:20:22.813 "state": "enabled", 00:20:22.813 "listen_address": { 00:20:22.813 "trtype": "TCP", 00:20:22.813 "adrfam": "IPv4", 00:20:22.813 "traddr": "10.0.0.2", 00:20:22.813 "trsvcid": "4420" 00:20:22.813 }, 00:20:22.813 "peer_address": { 00:20:22.813 "trtype": "TCP", 00:20:22.813 "adrfam": "IPv4", 00:20:22.813 "traddr": "10.0.0.1", 00:20:22.813 "trsvcid": "44986" 00:20:22.813 }, 00:20:22.813 "auth": { 00:20:22.813 "state": "completed", 00:20:22.813 "digest": "sha256", 00:20:22.813 "dhgroup": "ffdhe8192" 00:20:22.813 } 00:20:22.813 } 00:20:22.813 ]' 00:20:22.813 00:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:22.813 00:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:22.813 00:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:22.813 00:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:22.813 00:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:22.813 00:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.813 00:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.813 00:35:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.073 00:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:01:YTY0ZjAxMmExYzEwNzc5MjAxMmZhYzA2NGFlODhmZTjO55Yc: 00:20:23.640 00:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.640 00:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:23.640 00:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:23.640 00:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.640 00:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:23.640 00:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:23.640 00:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:23.640 00:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:23.898 00:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 2 00:20:23.898 00:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:23.898 00:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:23.898 00:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:23.898 00:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:23.898 00:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key2 00:20:23.898 00:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:23.898 00:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.898 00:35:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:23.898 00:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:23.898 00:35:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:24.466 00:20:24.466 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:24.466 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.466 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:24.466 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.466 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.466 00:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:24.466 00:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.466 00:35:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:24.466 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:24.466 { 
00:20:24.466 "cntlid": 45, 00:20:24.466 "qid": 0, 00:20:24.466 "state": "enabled", 00:20:24.466 "listen_address": { 00:20:24.466 "trtype": "TCP", 00:20:24.466 "adrfam": "IPv4", 00:20:24.466 "traddr": "10.0.0.2", 00:20:24.466 "trsvcid": "4420" 00:20:24.466 }, 00:20:24.466 "peer_address": { 00:20:24.466 "trtype": "TCP", 00:20:24.466 "adrfam": "IPv4", 00:20:24.466 "traddr": "10.0.0.1", 00:20:24.466 "trsvcid": "45030" 00:20:24.466 }, 00:20:24.466 "auth": { 00:20:24.466 "state": "completed", 00:20:24.466 "digest": "sha256", 00:20:24.466 "dhgroup": "ffdhe8192" 00:20:24.466 } 00:20:24.466 } 00:20:24.466 ]' 00:20:24.466 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:24.466 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:24.466 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:24.466 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:24.466 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:24.466 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.466 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.466 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.725 00:35:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:02:NDRmN2Q2ZTBkN2Q5ZmQ5NzRjOGNmMzVjMmUwMGM0NjMwNmE5YWUzZWZmZTAwYjljY+xEjw==: 00:20:25.293 00:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.293 00:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:25.293 00:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:25.293 00:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.293 00:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:25.293 00:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:25.293 00:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:25.293 00:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:25.551 00:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha256 ffdhe8192 3 00:20:25.551 00:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:25.551 00:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:25.551 00:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:25.551 00:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # 
key=key3 00:20:25.551 00:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key3 00:20:25.551 00:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:25.551 00:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.551 00:35:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:25.551 00:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:25.551 00:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:25.808 00:20:25.808 00:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:25.808 00:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.808 00:35:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:26.066 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.066 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.066 00:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:26.066 00:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.066 00:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:26.066 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:26.066 { 00:20:26.066 "cntlid": 47, 00:20:26.066 "qid": 0, 00:20:26.066 "state": "enabled", 00:20:26.066 "listen_address": { 00:20:26.066 "trtype": "TCP", 00:20:26.066 "adrfam": "IPv4", 00:20:26.066 "traddr": "10.0.0.2", 00:20:26.066 "trsvcid": "4420" 00:20:26.066 }, 00:20:26.066 "peer_address": { 00:20:26.066 "trtype": "TCP", 00:20:26.066 "adrfam": "IPv4", 00:20:26.066 "traddr": "10.0.0.1", 00:20:26.066 "trsvcid": "45056" 00:20:26.066 }, 00:20:26.066 "auth": { 00:20:26.066 "state": "completed", 00:20:26.066 "digest": "sha256", 00:20:26.066 "dhgroup": "ffdhe8192" 00:20:26.066 } 00:20:26.066 } 00:20:26.066 ]' 00:20:26.066 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:26.066 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.066 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:26.066 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:26.066 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:26.066 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.066 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.066 00:35:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.323 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:03:MDAyNWQzNjRjOGQxM2M1OGZkMmUwN2QwMjMzZDNjNGY5MjRlMDQ1YmUxNTllZjkyODc2MzhjOWNmMjc5ZDA0Y6xyGPs=: 00:20:26.893 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.893 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:26.893 00:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:26.893 00:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.893 00:35:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:26.893 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:20:26.893 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:26.893 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:26.893 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:26.893 00:35:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:27.153 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 0 00:20:27.153 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:27.153 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:27.153 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:27.153 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:27.153 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key0 00:20:27.153 00:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:27.153 00:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.153 00:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:27.153 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:27.153 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:27.412 
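(The trace above and below repeats one DH-HMAC-CHAP verification cycle per combination of digest, DH group, and key index. As a reading aid, here is a minimal bash sketch of a single iteration, reconstructed only from the commands that appear verbatim in this trace (rpc.py, nvme-cli, jq). The helper names, the target-side RPC socket, and the secret placeholder are assumptions for illustration; the actual target/auth.sh in the SPDK tree may differ.)

    #!/usr/bin/env bash
    # Sketch of one connect/authenticate iteration, assembled from the RPCs
    # visible in this trace. Assumptions: rpc.py without -s reaches the target
    # application's default socket; the secret placeholder stands in for the
    # DHHC-1 strings printed verbatim in the log.
    set -e

    SPDK_RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:$HOSTID
    DIGEST=sha256        # sha384 is exercised later in the trace
    DHGROUP=ffdhe8192    # null, ffdhe4096 and ffdhe6144 are exercised as well
    KEY=key0             # keys 0..3 are cycled
    DHCHAP_SECRET='DHHC-1:00:placeholder'  # real secrets appear in the trace

    hostrpc() { "$SPDK_RPC" -s /var/tmp/host.sock "$@"; }  # host-side bdev_nvme RPCs
    rpc_cmd() { "$SPDK_RPC" "$@"; }                        # target-side nvmf RPCs (socket assumed)

    # Restrict the host to one digest/dhgroup and register the key on the target.
    hostrpc bdev_nvme_set_options --dhchap-digests "$DIGEST" --dhchap-dhgroups "$DHGROUP"
    rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "$KEY"

    # Attach over TCP and confirm the controller came up and authenticated.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" --dhchap-key "$KEY"
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$DIGEST" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$DHGROUP" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Repeat the handshake with the kernel initiator, then clean up.
    hostrpc bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
        --hostid "$HOSTID" --dhchap-secret "$DHCHAP_SECRET"
    nvme disconnect -n "$SUBNQN"
    rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"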
00:20:27.412 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:27.412 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.412 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:27.412 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.412 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.412 00:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:27.412 00:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.412 00:35:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:27.412 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:27.412 { 00:20:27.412 "cntlid": 49, 00:20:27.412 "qid": 0, 00:20:27.412 "state": "enabled", 00:20:27.412 "listen_address": { 00:20:27.412 "trtype": "TCP", 00:20:27.412 "adrfam": "IPv4", 00:20:27.412 "traddr": "10.0.0.2", 00:20:27.412 "trsvcid": "4420" 00:20:27.412 }, 00:20:27.412 "peer_address": { 00:20:27.412 "trtype": "TCP", 00:20:27.412 "adrfam": "IPv4", 00:20:27.412 "traddr": "10.0.0.1", 00:20:27.412 "trsvcid": "45084" 00:20:27.412 }, 00:20:27.412 "auth": { 00:20:27.412 "state": "completed", 00:20:27.412 "digest": "sha384", 00:20:27.412 "dhgroup": "null" 00:20:27.412 } 00:20:27.412 } 00:20:27.412 ]' 00:20:27.412 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:27.412 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.412 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:27.412 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:27.412 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:27.670 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.670 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.670 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.670 00:35:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:00:NDRjNGM1MzA4ZDVmMGQ5OGYyNDAyZDg1MDJiNGRhYTY1NjBlYWJmNGZlYWE4MDc52HBcqg==: 00:20:28.237 00:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.237 00:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:28.237 00:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:28.237 00:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.237 00:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 
-- # [[ 0 == 0 ]] 00:20:28.237 00:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:28.237 00:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:28.237 00:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:28.497 00:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 1 00:20:28.497 00:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:28.497 00:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:28.497 00:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:28.497 00:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:28.497 00:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key1 00:20:28.497 00:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:28.497 00:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.497 00:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:28.497 00:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:28.497 00:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:28.761 00:20:28.761 00:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:28.761 00:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.761 00:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:28.761 00:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.761 00:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.761 00:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:28.761 00:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.761 00:35:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:28.761 00:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:28.761 { 00:20:28.761 "cntlid": 51, 00:20:28.761 "qid": 0, 00:20:28.761 "state": "enabled", 00:20:28.761 "listen_address": { 00:20:28.761 "trtype": "TCP", 00:20:28.761 "adrfam": "IPv4", 00:20:28.761 "traddr": "10.0.0.2", 00:20:28.761 "trsvcid": "4420" 00:20:28.761 }, 00:20:28.761 "peer_address": { 00:20:28.761 "trtype": "TCP", 00:20:28.761 "adrfam": "IPv4", 00:20:28.761 "traddr": "10.0.0.1", 00:20:28.761 "trsvcid": "45112" 00:20:28.761 
}, 00:20:28.761 "auth": { 00:20:28.761 "state": "completed", 00:20:28.761 "digest": "sha384", 00:20:28.761 "dhgroup": "null" 00:20:28.761 } 00:20:28.761 } 00:20:28.761 ]' 00:20:28.761 00:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:28.761 00:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.761 00:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:29.019 00:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:29.019 00:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:29.019 00:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.019 00:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.019 00:35:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.019 00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:01:YTY0ZjAxMmExYzEwNzc5MjAxMmZhYzA2NGFlODhmZTjO55Yc: 00:20:29.593 00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.593 00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:29.593 00:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:29.593 00:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.593 00:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:29.593 00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:29.593 00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:29.593 00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:29.852 00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 2 00:20:29.852 00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:29.852 00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:29.852 00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:29.852 00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:29.852 00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key2 00:20:29.852 00:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:29.852 00:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.852 00:35:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 
-- # [[ 0 == 0 ]] 00:20:29.852 00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:29.852 00:35:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:30.112 00:20:30.112 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:30.112 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:30.112 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.112 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.112 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.112 00:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:30.112 00:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.112 00:35:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:30.112 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:30.112 { 00:20:30.112 "cntlid": 53, 00:20:30.112 "qid": 0, 00:20:30.112 "state": "enabled", 00:20:30.112 "listen_address": { 00:20:30.112 "trtype": "TCP", 00:20:30.112 "adrfam": "IPv4", 00:20:30.112 "traddr": "10.0.0.2", 00:20:30.112 "trsvcid": "4420" 00:20:30.112 }, 00:20:30.112 "peer_address": { 00:20:30.112 "trtype": "TCP", 00:20:30.112 "adrfam": "IPv4", 00:20:30.112 "traddr": "10.0.0.1", 00:20:30.113 "trsvcid": "45132" 00:20:30.113 }, 00:20:30.113 "auth": { 00:20:30.113 "state": "completed", 00:20:30.113 "digest": "sha384", 00:20:30.113 "dhgroup": "null" 00:20:30.113 } 00:20:30.113 } 00:20:30.113 ]' 00:20:30.113 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:30.373 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.373 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:30.373 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:30.373 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:30.373 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.373 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.373 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.373 00:35:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:02:NDRmN2Q2ZTBkN2Q5ZmQ5NzRjOGNmMzVjMmUwMGM0NjMwNmE5YWUzZWZmZTAwYjljY+xEjw==: 00:20:31.310 
00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.310 00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:31.310 00:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:31.310 00:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.310 00:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:31.310 00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:31.310 00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:31.310 00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:31.310 00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 null 3 00:20:31.310 00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:31.310 00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:31.310 00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:31.310 00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:31.310 00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key3 00:20:31.310 00:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:31.310 00:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.310 00:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:31.310 00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:31.310 00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:31.310 00:20:31.310 00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:31.310 00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.310 00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:31.569 00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.569 00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.569 00:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:31.569 00:35:57 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:31.569 00:35:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:31.569 00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:31.569 { 00:20:31.569 "cntlid": 55, 00:20:31.569 "qid": 0, 00:20:31.569 "state": "enabled", 00:20:31.569 "listen_address": { 00:20:31.569 "trtype": "TCP", 00:20:31.569 "adrfam": "IPv4", 00:20:31.569 "traddr": "10.0.0.2", 00:20:31.569 "trsvcid": "4420" 00:20:31.569 }, 00:20:31.569 "peer_address": { 00:20:31.569 "trtype": "TCP", 00:20:31.569 "adrfam": "IPv4", 00:20:31.569 "traddr": "10.0.0.1", 00:20:31.569 "trsvcid": "45166" 00:20:31.569 }, 00:20:31.569 "auth": { 00:20:31.569 "state": "completed", 00:20:31.569 "digest": "sha384", 00:20:31.569 "dhgroup": "null" 00:20:31.569 } 00:20:31.569 } 00:20:31.569 ]' 00:20:31.569 00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:31.569 00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:31.569 00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:31.569 00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:20:31.569 00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:31.569 00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.569 00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.569 00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.828 00:35:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:03:MDAyNWQzNjRjOGQxM2M1OGZkMmUwN2QwMjMzZDNjNGY5MjRlMDQ1YmUxNTllZjkyODc2MzhjOWNmMjc5ZDA0Y6xyGPs=: 00:20:32.397 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.657 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:32.657 00:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:32.657 00:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.657 00:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:32.657 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:32.657 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:32.657 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:32.657 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:32.657 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 0 00:20:32.657 00:35:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:32.657 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:32.657 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:32.657 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:32.657 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key0 00:20:32.657 00:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:32.657 00:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.657 00:35:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:32.657 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:32.657 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:32.916 00:20:32.916 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:32.916 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.916 00:35:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:32.916 00:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.916 00:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.916 00:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:32.916 00:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.916 00:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:32.916 00:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:32.916 { 00:20:32.916 "cntlid": 57, 00:20:32.916 "qid": 0, 00:20:32.916 "state": "enabled", 00:20:32.916 "listen_address": { 00:20:32.916 "trtype": "TCP", 00:20:32.916 "adrfam": "IPv4", 00:20:32.916 "traddr": "10.0.0.2", 00:20:32.916 "trsvcid": "4420" 00:20:32.916 }, 00:20:32.916 "peer_address": { 00:20:32.916 "trtype": "TCP", 00:20:32.916 "adrfam": "IPv4", 00:20:32.916 "traddr": "10.0.0.1", 00:20:32.916 "trsvcid": "54346" 00:20:32.916 }, 00:20:32.916 "auth": { 00:20:32.916 "state": "completed", 00:20:32.916 "digest": "sha384", 00:20:32.916 "dhgroup": "ffdhe2048" 00:20:32.916 } 00:20:32.916 } 00:20:32.916 ]' 00:20:32.916 00:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:33.174 00:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:33.174 00:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:33.174 00:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:33.174 00:35:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:33.174 00:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.174 00:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.174 00:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.174 00:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:00:NDRjNGM1MzA4ZDVmMGQ5OGYyNDAyZDg1MDJiNGRhYTY1NjBlYWJmNGZlYWE4MDc52HBcqg==: 00:20:33.742 00:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.001 00:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:34.001 00:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:34.001 00:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.001 00:35:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:34.001 00:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:34.001 00:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:34.001 00:35:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:34.001 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 1 00:20:34.001 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:34.001 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:34.001 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:34.001 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:34.001 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key1 00:20:34.001 00:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:34.001 00:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.001 00:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:34.001 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:34.001 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:34.258 00:20:34.258 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:34.258 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:34.258 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.516 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.516 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.516 00:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:34.516 00:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.516 00:36:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:34.516 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:34.516 { 00:20:34.516 "cntlid": 59, 00:20:34.516 "qid": 0, 00:20:34.516 "state": "enabled", 00:20:34.516 "listen_address": { 00:20:34.516 "trtype": "TCP", 00:20:34.516 "adrfam": "IPv4", 00:20:34.516 "traddr": "10.0.0.2", 00:20:34.516 "trsvcid": "4420" 00:20:34.516 }, 00:20:34.516 "peer_address": { 00:20:34.516 "trtype": "TCP", 00:20:34.516 "adrfam": "IPv4", 00:20:34.516 "traddr": "10.0.0.1", 00:20:34.516 "trsvcid": "54374" 00:20:34.516 }, 00:20:34.516 "auth": { 00:20:34.516 "state": "completed", 00:20:34.516 "digest": "sha384", 00:20:34.516 "dhgroup": "ffdhe2048" 00:20:34.516 } 00:20:34.516 } 00:20:34.516 ]' 00:20:34.516 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:34.516 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.516 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:34.516 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:34.516 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:34.516 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.516 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.516 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.774 00:36:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:01:YTY0ZjAxMmExYzEwNzc5MjAxMmZhYzA2NGFlODhmZTjO55Yc: 00:20:35.341 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.341 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:35.341 00:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:35.341 00:36:01 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.341 00:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:35.341 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:35.341 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:35.341 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:35.341 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 2 00:20:35.341 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:35.341 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:35.341 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:35.341 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:35.341 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key2 00:20:35.341 00:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:35.341 00:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.341 00:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:35.341 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:35.341 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:35.599 00:20:35.599 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:35.599 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.599 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:35.857 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.857 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.857 00:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:35.857 00:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.857 00:36:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:35.857 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:35.857 { 00:20:35.857 "cntlid": 61, 00:20:35.857 "qid": 0, 00:20:35.857 "state": "enabled", 00:20:35.857 "listen_address": { 00:20:35.857 "trtype": "TCP", 00:20:35.857 "adrfam": "IPv4", 00:20:35.857 "traddr": "10.0.0.2", 00:20:35.857 "trsvcid": "4420" 00:20:35.857 }, 
00:20:35.857 "peer_address": { 00:20:35.857 "trtype": "TCP", 00:20:35.857 "adrfam": "IPv4", 00:20:35.857 "traddr": "10.0.0.1", 00:20:35.857 "trsvcid": "54412" 00:20:35.857 }, 00:20:35.857 "auth": { 00:20:35.857 "state": "completed", 00:20:35.857 "digest": "sha384", 00:20:35.857 "dhgroup": "ffdhe2048" 00:20:35.857 } 00:20:35.857 } 00:20:35.857 ]' 00:20:35.857 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:35.857 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.857 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:35.857 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:35.857 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:35.857 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.857 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.857 00:36:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.118 00:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:02:NDRmN2Q2ZTBkN2Q5ZmQ5NzRjOGNmMzVjMmUwMGM0NjMwNmE5YWUzZWZmZTAwYjljY+xEjw==: 00:20:36.684 00:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.684 00:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:36.684 00:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:36.684 00:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.684 00:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:36.684 00:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:36.684 00:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:36.684 00:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:36.684 00:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe2048 3 00:20:36.684 00:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:36.684 00:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:36.684 00:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:36.684 00:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:36.684 00:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key3 00:20:36.684 00:36:02 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:36.684 00:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.684 00:36:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:36.684 00:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:36.684 00:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:36.942 00:20:36.942 00:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:36.942 00:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.942 00:36:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:36.942 00:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.942 00:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.942 00:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:36.942 00:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.267 00:36:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:37.267 00:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:37.267 { 00:20:37.267 "cntlid": 63, 00:20:37.267 "qid": 0, 00:20:37.267 "state": "enabled", 00:20:37.267 "listen_address": { 00:20:37.267 "trtype": "TCP", 00:20:37.267 "adrfam": "IPv4", 00:20:37.267 "traddr": "10.0.0.2", 00:20:37.267 "trsvcid": "4420" 00:20:37.267 }, 00:20:37.267 "peer_address": { 00:20:37.267 "trtype": "TCP", 00:20:37.267 "adrfam": "IPv4", 00:20:37.267 "traddr": "10.0.0.1", 00:20:37.267 "trsvcid": "54448" 00:20:37.267 }, 00:20:37.267 "auth": { 00:20:37.267 "state": "completed", 00:20:37.267 "digest": "sha384", 00:20:37.267 "dhgroup": "ffdhe2048" 00:20:37.267 } 00:20:37.267 } 00:20:37.267 ]' 00:20:37.267 00:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:37.267 00:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.267 00:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:37.267 00:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:37.267 00:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:37.267 00:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.267 00:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.267 00:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.267 00:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:03:MDAyNWQzNjRjOGQxM2M1OGZkMmUwN2QwMjMzZDNjNGY5MjRlMDQ1YmUxNTllZjkyODc2MzhjOWNmMjc5ZDA0Y6xyGPs=: 00:20:37.850 00:36:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.110 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.110 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:38.110 00:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.110 00:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.110 00:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.110 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:38.110 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:38.110 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:38.110 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:38.110 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 0 00:20:38.110 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:38.110 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:38.110 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:38.110 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:38.110 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key0 00:20:38.110 00:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.110 00:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.110 00:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.110 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:38.110 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:38.368 00:20:38.368 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:38.368 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:38.368 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:38.626 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.626 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.626 00:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:38.626 00:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.626 00:36:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:38.626 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:38.626 { 00:20:38.626 "cntlid": 65, 00:20:38.626 "qid": 0, 00:20:38.626 "state": "enabled", 00:20:38.626 "listen_address": { 00:20:38.626 "trtype": "TCP", 00:20:38.626 "adrfam": "IPv4", 00:20:38.626 "traddr": "10.0.0.2", 00:20:38.626 "trsvcid": "4420" 00:20:38.626 }, 00:20:38.626 "peer_address": { 00:20:38.626 "trtype": "TCP", 00:20:38.626 "adrfam": "IPv4", 00:20:38.626 "traddr": "10.0.0.1", 00:20:38.626 "trsvcid": "54476" 00:20:38.626 }, 00:20:38.626 "auth": { 00:20:38.626 "state": "completed", 00:20:38.626 "digest": "sha384", 00:20:38.626 "dhgroup": "ffdhe3072" 00:20:38.626 } 00:20:38.626 } 00:20:38.626 ]' 00:20:38.626 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:38.626 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:38.626 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:38.626 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:38.626 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:38.626 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.626 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.626 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:38.884 00:36:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:00:NDRjNGM1MzA4ZDVmMGQ5OGYyNDAyZDg1MDJiNGRhYTY1NjBlYWJmNGZlYWE4MDc52HBcqg==: 00:20:39.454 00:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:39.454 00:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:39.454 00:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:39.454 00:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.454 00:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:39.454 00:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:39.454 00:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:39.454 00:36:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:39.454 00:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 1 00:20:39.454 00:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:39.454 00:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:39.454 00:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:39.454 00:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:39.454 00:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key1 00:20:39.454 00:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:39.454 00:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.454 00:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:39.454 00:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:39.454 00:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:39.714 00:20:39.714 00:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:39.714 00:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.714 00:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:39.972 00:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.972 00:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.972 00:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:39.972 00:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.972 00:36:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:39.972 00:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:39.972 { 00:20:39.972 "cntlid": 67, 00:20:39.972 "qid": 0, 00:20:39.972 "state": "enabled", 00:20:39.972 "listen_address": { 00:20:39.972 "trtype": "TCP", 00:20:39.972 "adrfam": "IPv4", 00:20:39.972 "traddr": "10.0.0.2", 00:20:39.972 "trsvcid": "4420" 00:20:39.972 }, 00:20:39.972 "peer_address": { 00:20:39.972 "trtype": "TCP", 00:20:39.972 "adrfam": "IPv4", 00:20:39.972 "traddr": "10.0.0.1", 00:20:39.972 "trsvcid": "54512" 00:20:39.972 }, 00:20:39.972 "auth": { 00:20:39.972 "state": "completed", 00:20:39.972 "digest": "sha384", 00:20:39.972 "dhgroup": "ffdhe3072" 00:20:39.972 } 00:20:39.972 } 00:20:39.972 ]' 00:20:39.973 00:36:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:39.973 00:36:06 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.973 00:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:39.973 00:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:39.973 00:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:39.973 00:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.973 00:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.973 00:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.231 00:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:01:YTY0ZjAxMmExYzEwNzc5MjAxMmZhYzA2NGFlODhmZTjO55Yc: 00:20:40.796 00:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.796 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.796 00:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:40.796 00:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:40.796 00:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.796 00:36:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:40.796 00:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:40.796 00:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:40.796 00:36:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:41.055 00:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 2 00:20:41.055 00:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:41.055 00:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:41.055 00:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:41.055 00:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:41.055 00:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key2 00:20:41.055 00:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:41.055 00:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.055 00:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:41.055 00:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:41.055 00:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:41.315 00:20:41.315 00:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:41.315 00:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:41.315 00:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.315 00:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.315 00:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.315 00:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:41.315 00:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.574 00:36:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:41.574 00:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:41.574 { 00:20:41.574 "cntlid": 69, 00:20:41.574 "qid": 0, 00:20:41.574 "state": "enabled", 00:20:41.574 "listen_address": { 00:20:41.574 "trtype": "TCP", 00:20:41.574 "adrfam": "IPv4", 00:20:41.574 "traddr": "10.0.0.2", 00:20:41.574 "trsvcid": "4420" 00:20:41.574 }, 00:20:41.574 "peer_address": { 00:20:41.574 "trtype": "TCP", 00:20:41.574 "adrfam": "IPv4", 00:20:41.574 "traddr": "10.0.0.1", 00:20:41.574 "trsvcid": "54544" 00:20:41.574 }, 00:20:41.574 "auth": { 00:20:41.574 "state": "completed", 00:20:41.574 "digest": "sha384", 00:20:41.574 "dhgroup": "ffdhe3072" 00:20:41.574 } 00:20:41.574 } 00:20:41.574 ]' 00:20:41.574 00:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:41.574 00:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.574 00:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:41.574 00:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:41.574 00:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:41.574 00:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.574 00:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.574 00:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.833 00:36:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:02:NDRmN2Q2ZTBkN2Q5ZmQ5NzRjOGNmMzVjMmUwMGM0NjMwNmE5YWUzZWZmZTAwYjljY+xEjw==: 00:20:42.398 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.398 00:36:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:42.398 00:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:42.398 00:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.398 00:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:42.398 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:42.399 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:42.399 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:42.399 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe3072 3 00:20:42.399 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:42.399 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:42.399 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:42.399 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:42.399 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key3 00:20:42.399 00:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:42.399 00:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.399 00:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:42.399 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.399 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:42.658 00:20:42.658 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:42.658 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.658 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:42.918 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.918 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.918 00:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:42.918 00:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.918 00:36:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:42.918 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 
00:20:42.918 { 00:20:42.918 "cntlid": 71, 00:20:42.918 "qid": 0, 00:20:42.918 "state": "enabled", 00:20:42.918 "listen_address": { 00:20:42.918 "trtype": "TCP", 00:20:42.918 "adrfam": "IPv4", 00:20:42.918 "traddr": "10.0.0.2", 00:20:42.918 "trsvcid": "4420" 00:20:42.918 }, 00:20:42.918 "peer_address": { 00:20:42.918 "trtype": "TCP", 00:20:42.919 "adrfam": "IPv4", 00:20:42.919 "traddr": "10.0.0.1", 00:20:42.919 "trsvcid": "50228" 00:20:42.919 }, 00:20:42.919 "auth": { 00:20:42.919 "state": "completed", 00:20:42.919 "digest": "sha384", 00:20:42.919 "dhgroup": "ffdhe3072" 00:20:42.919 } 00:20:42.919 } 00:20:42.919 ]' 00:20:42.919 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:42.919 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.919 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:42.919 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:42.919 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:42.919 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.919 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.919 00:36:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.178 00:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:03:MDAyNWQzNjRjOGQxM2M1OGZkMmUwN2QwMjMzZDNjNGY5MjRlMDQ1YmUxNTllZjkyODc2MzhjOWNmMjc5ZDA0Y6xyGPs=: 00:20:43.747 00:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.747 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.747 00:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:43.747 00:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:43.747 00:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.747 00:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:43.747 00:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.747 00:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:43.747 00:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:43.747 00:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:43.747 00:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 0 00:20:43.747 00:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:43.747 00:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:43.747 00:36:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:43.747 00:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:43.747 00:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key0 00:20:43.747 00:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:43.747 00:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.747 00:36:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:43.747 00:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:43.747 00:36:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:44.004 00:20:44.004 00:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:44.004 00:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.004 00:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:44.262 00:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.262 00:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.262 00:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:44.262 00:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.262 00:36:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:44.262 00:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:44.262 { 00:20:44.262 "cntlid": 73, 00:20:44.262 "qid": 0, 00:20:44.262 "state": "enabled", 00:20:44.262 "listen_address": { 00:20:44.262 "trtype": "TCP", 00:20:44.262 "adrfam": "IPv4", 00:20:44.262 "traddr": "10.0.0.2", 00:20:44.262 "trsvcid": "4420" 00:20:44.262 }, 00:20:44.263 "peer_address": { 00:20:44.263 "trtype": "TCP", 00:20:44.263 "adrfam": "IPv4", 00:20:44.263 "traddr": "10.0.0.1", 00:20:44.263 "trsvcid": "50246" 00:20:44.263 }, 00:20:44.263 "auth": { 00:20:44.263 "state": "completed", 00:20:44.263 "digest": "sha384", 00:20:44.263 "dhgroup": "ffdhe4096" 00:20:44.263 } 00:20:44.263 } 00:20:44.263 ]' 00:20:44.263 00:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:44.263 00:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.263 00:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:44.263 00:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:44.263 00:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:44.263 00:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.263 
00:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.263 00:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.522 00:36:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:00:NDRjNGM1MzA4ZDVmMGQ5OGYyNDAyZDg1MDJiNGRhYTY1NjBlYWJmNGZlYWE4MDc52HBcqg==: 00:20:45.089 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.089 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:45.090 00:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:45.090 00:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.090 00:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:45.090 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:45.090 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:45.090 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:45.090 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 1 00:20:45.090 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:45.090 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:45.090 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:45.090 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:45.090 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key1 00:20:45.090 00:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:45.090 00:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.090 00:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:45.090 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:45.090 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:45.348 00:20:45.348 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc 
bdev_nvme_get_controllers 00:20:45.348 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:45.348 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.606 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.606 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.606 00:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:45.606 00:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.606 00:36:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:45.606 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:45.606 { 00:20:45.606 "cntlid": 75, 00:20:45.606 "qid": 0, 00:20:45.606 "state": "enabled", 00:20:45.606 "listen_address": { 00:20:45.606 "trtype": "TCP", 00:20:45.606 "adrfam": "IPv4", 00:20:45.606 "traddr": "10.0.0.2", 00:20:45.607 "trsvcid": "4420" 00:20:45.607 }, 00:20:45.607 "peer_address": { 00:20:45.607 "trtype": "TCP", 00:20:45.607 "adrfam": "IPv4", 00:20:45.607 "traddr": "10.0.0.1", 00:20:45.607 "trsvcid": "50292" 00:20:45.607 }, 00:20:45.607 "auth": { 00:20:45.607 "state": "completed", 00:20:45.607 "digest": "sha384", 00:20:45.607 "dhgroup": "ffdhe4096" 00:20:45.607 } 00:20:45.607 } 00:20:45.607 ]' 00:20:45.607 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:45.607 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.607 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:45.607 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:45.607 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:45.607 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.607 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.607 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.865 00:36:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:01:YTY0ZjAxMmExYzEwNzc5MjAxMmZhYzA2NGFlODhmZTjO55Yc: 00:20:46.434 00:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.434 00:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:46.434 00:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:46.434 00:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.434 00:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:46.434 00:36:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:46.434 00:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:46.434 00:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:46.693 00:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 2 00:20:46.693 00:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:46.693 00:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:46.693 00:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:46.693 00:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:46.693 00:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key2 00:20:46.693 00:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:46.693 00:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.693 00:36:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:46.693 00:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:46.693 00:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:46.951 00:20:46.951 00:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:46.951 00:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:46.951 00:36:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.951 00:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.951 00:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.951 00:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:46.951 00:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.951 00:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:46.951 00:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:46.951 { 00:20:46.951 "cntlid": 77, 00:20:46.951 "qid": 0, 00:20:46.951 "state": "enabled", 00:20:46.951 "listen_address": { 00:20:46.951 "trtype": "TCP", 00:20:46.951 "adrfam": "IPv4", 00:20:46.951 "traddr": "10.0.0.2", 00:20:46.951 "trsvcid": "4420" 00:20:46.951 }, 00:20:46.951 "peer_address": { 00:20:46.951 "trtype": "TCP", 00:20:46.951 "adrfam": "IPv4", 00:20:46.951 "traddr": "10.0.0.1", 00:20:46.951 "trsvcid": "50328" 00:20:46.951 }, 00:20:46.951 "auth": { 00:20:46.951 "state": 
"completed", 00:20:46.951 "digest": "sha384", 00:20:46.951 "dhgroup": "ffdhe4096" 00:20:46.951 } 00:20:46.951 } 00:20:46.951 ]' 00:20:46.951 00:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:46.951 00:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.951 00:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:47.210 00:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:47.210 00:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:47.210 00:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.210 00:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.210 00:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:47.210 00:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:02:NDRmN2Q2ZTBkN2Q5ZmQ5NzRjOGNmMzVjMmUwMGM0NjMwNmE5YWUzZWZmZTAwYjljY+xEjw==: 00:20:47.776 00:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.035 00:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:48.035 00:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.035 00:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.035 00:36:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.035 00:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:48.035 00:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:48.035 00:36:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:48.035 00:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe4096 3 00:20:48.035 00:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:48.035 00:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:48.035 00:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:48.035 00:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:48.035 00:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key3 00:20:48.035 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.035 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.035 00:36:14 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.035 00:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:48.035 00:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:48.294 00:20:48.294 00:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:48.294 00:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.294 00:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:48.553 00:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.553 00:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.553 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:48.553 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.553 00:36:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:48.553 00:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:48.553 { 00:20:48.553 "cntlid": 79, 00:20:48.553 "qid": 0, 00:20:48.553 "state": "enabled", 00:20:48.553 "listen_address": { 00:20:48.553 "trtype": "TCP", 00:20:48.553 "adrfam": "IPv4", 00:20:48.553 "traddr": "10.0.0.2", 00:20:48.553 "trsvcid": "4420" 00:20:48.553 }, 00:20:48.553 "peer_address": { 00:20:48.553 "trtype": "TCP", 00:20:48.553 "adrfam": "IPv4", 00:20:48.553 "traddr": "10.0.0.1", 00:20:48.553 "trsvcid": "50368" 00:20:48.553 }, 00:20:48.553 "auth": { 00:20:48.553 "state": "completed", 00:20:48.553 "digest": "sha384", 00:20:48.553 "dhgroup": "ffdhe4096" 00:20:48.553 } 00:20:48.553 } 00:20:48.553 ]' 00:20:48.553 00:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:48.553 00:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.553 00:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:48.553 00:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:48.553 00:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:48.553 00:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.553 00:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.553 00:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.813 00:36:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret 
DHHC-1:03:MDAyNWQzNjRjOGQxM2M1OGZkMmUwN2QwMjMzZDNjNGY5MjRlMDQ1YmUxNTllZjkyODc2MzhjOWNmMjc5ZDA0Y6xyGPs=: 00:20:49.381 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.381 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:49.381 00:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:49.381 00:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.381 00:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:49.381 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:49.381 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:49.381 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:49.381 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:49.381 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 0 00:20:49.381 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:49.381 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:49.381 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:49.381 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:49.381 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key0 00:20:49.381 00:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:49.381 00:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.381 00:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:49.381 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:49.381 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:49.948 00:20:49.948 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:49.948 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.948 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:49.948 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.948 00:36:15 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.948 00:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:49.948 00:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.948 00:36:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:49.948 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:49.948 { 00:20:49.948 "cntlid": 81, 00:20:49.948 "qid": 0, 00:20:49.948 "state": "enabled", 00:20:49.948 "listen_address": { 00:20:49.948 "trtype": "TCP", 00:20:49.948 "adrfam": "IPv4", 00:20:49.948 "traddr": "10.0.0.2", 00:20:49.948 "trsvcid": "4420" 00:20:49.948 }, 00:20:49.948 "peer_address": { 00:20:49.948 "trtype": "TCP", 00:20:49.948 "adrfam": "IPv4", 00:20:49.948 "traddr": "10.0.0.1", 00:20:49.948 "trsvcid": "50396" 00:20:49.948 }, 00:20:49.948 "auth": { 00:20:49.948 "state": "completed", 00:20:49.948 "digest": "sha384", 00:20:49.948 "dhgroup": "ffdhe6144" 00:20:49.948 } 00:20:49.948 } 00:20:49.948 ]' 00:20:49.948 00:36:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:49.948 00:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.948 00:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:49.948 00:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:49.948 00:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:49.948 00:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.948 00:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.948 00:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.208 00:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:00:NDRjNGM1MzA4ZDVmMGQ5OGYyNDAyZDg1MDJiNGRhYTY1NjBlYWJmNGZlYWE4MDc52HBcqg==: 00:20:50.777 00:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.777 00:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:50.777 00:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:50.777 00:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.777 00:36:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:50.777 00:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:50.777 00:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:50.777 00:36:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
00:20:51.036 00:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 1 00:20:51.036 00:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:51.036 00:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:51.036 00:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:51.036 00:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:51.036 00:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key1 00:20:51.036 00:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.036 00:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.036 00:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.036 00:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:51.036 00:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:51.295 00:20:51.295 00:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:51.295 00:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:51.295 00:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.554 00:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.554 00:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.554 00:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:51.554 00:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.554 00:36:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:51.554 00:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:51.554 { 00:20:51.554 "cntlid": 83, 00:20:51.554 "qid": 0, 00:20:51.554 "state": "enabled", 00:20:51.554 "listen_address": { 00:20:51.554 "trtype": "TCP", 00:20:51.554 "adrfam": "IPv4", 00:20:51.554 "traddr": "10.0.0.2", 00:20:51.554 "trsvcid": "4420" 00:20:51.554 }, 00:20:51.554 "peer_address": { 00:20:51.554 "trtype": "TCP", 00:20:51.554 "adrfam": "IPv4", 00:20:51.554 "traddr": "10.0.0.1", 00:20:51.554 "trsvcid": "50428" 00:20:51.554 }, 00:20:51.554 "auth": { 00:20:51.554 "state": "completed", 00:20:51.554 "digest": "sha384", 00:20:51.554 "dhgroup": "ffdhe6144" 00:20:51.554 } 00:20:51.554 } 00:20:51.554 ]' 00:20:51.554 00:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:51.554 00:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.554 00:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 
00:20:51.554 00:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:51.554 00:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:51.554 00:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.554 00:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.554 00:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.814 00:36:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:01:YTY0ZjAxMmExYzEwNzc5MjAxMmZhYzA2NGFlODhmZTjO55Yc: 00:20:52.384 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.384 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:52.384 00:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:52.384 00:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.384 00:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:52.384 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:52.384 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:52.384 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:52.384 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 2 00:20:52.384 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:52.384 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:52.384 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:52.384 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:52.384 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key2 00:20:52.384 00:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:52.384 00:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.385 00:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:52.385 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:52.385 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:52.951 00:20:52.951 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:52.951 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.951 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:52.951 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.951 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.951 00:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:52.951 00:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.951 00:36:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:52.951 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:52.951 { 00:20:52.951 "cntlid": 85, 00:20:52.951 "qid": 0, 00:20:52.951 "state": "enabled", 00:20:52.951 "listen_address": { 00:20:52.951 "trtype": "TCP", 00:20:52.951 "adrfam": "IPv4", 00:20:52.951 "traddr": "10.0.0.2", 00:20:52.951 "trsvcid": "4420" 00:20:52.951 }, 00:20:52.951 "peer_address": { 00:20:52.951 "trtype": "TCP", 00:20:52.951 "adrfam": "IPv4", 00:20:52.951 "traddr": "10.0.0.1", 00:20:52.951 "trsvcid": "43072" 00:20:52.951 }, 00:20:52.951 "auth": { 00:20:52.951 "state": "completed", 00:20:52.951 "digest": "sha384", 00:20:52.951 "dhgroup": "ffdhe6144" 00:20:52.951 } 00:20:52.951 } 00:20:52.951 ]' 00:20:52.951 00:36:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:52.951 00:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.951 00:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:52.951 00:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:52.951 00:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:52.951 00:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.951 00:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.951 00:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.208 00:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:02:NDRmN2Q2ZTBkN2Q5ZmQ5NzRjOGNmMzVjMmUwMGM0NjMwNmE5YWUzZWZmZTAwYjljY+xEjw==: 00:20:53.775 00:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.775 00:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:53.775 00:36:19 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:53.775 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.775 00:36:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:53.775 00:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:53.775 00:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:53.775 00:36:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:54.035 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe6144 3 00:20:54.035 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:54.035 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:54.035 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:54.035 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:54.035 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key3 00:20:54.035 00:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:54.035 00:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.035 00:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:54.035 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:54.035 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:54.295 00:20:54.295 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:54.295 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.295 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:54.554 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.554 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.554 00:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:54.554 00:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.554 00:36:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:54.554 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:54.554 { 00:20:54.554 "cntlid": 87, 00:20:54.554 "qid": 0, 00:20:54.554 "state": "enabled", 00:20:54.554 "listen_address": { 00:20:54.554 "trtype": "TCP", 00:20:54.554 
"adrfam": "IPv4", 00:20:54.554 "traddr": "10.0.0.2", 00:20:54.554 "trsvcid": "4420" 00:20:54.554 }, 00:20:54.554 "peer_address": { 00:20:54.554 "trtype": "TCP", 00:20:54.554 "adrfam": "IPv4", 00:20:54.554 "traddr": "10.0.0.1", 00:20:54.554 "trsvcid": "43106" 00:20:54.554 }, 00:20:54.554 "auth": { 00:20:54.554 "state": "completed", 00:20:54.554 "digest": "sha384", 00:20:54.554 "dhgroup": "ffdhe6144" 00:20:54.554 } 00:20:54.554 } 00:20:54.554 ]' 00:20:54.554 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:54.554 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.554 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:54.554 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:54.554 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:54.554 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.554 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.554 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.812 00:36:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:03:MDAyNWQzNjRjOGQxM2M1OGZkMmUwN2QwMjMzZDNjNGY5MjRlMDQ1YmUxNTllZjkyODc2MzhjOWNmMjc5ZDA0Y6xyGPs=: 00:20:55.379 00:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.379 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.379 00:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:55.379 00:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.379 00:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.379 00:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.379 00:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:20:55.379 00:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:55.379 00:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:55.379 00:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:55.379 00:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 0 00:20:55.379 00:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:55.380 00:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:55.380 00:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:55.380 00:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:55.380 00:36:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key0 00:20:55.380 00:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:55.380 00:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.380 00:36:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:55.380 00:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:55.380 00:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:55.950 00:20:55.950 00:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:55.950 00:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:55.950 00:36:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.210 00:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.210 00:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.210 00:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:56.210 00:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.210 00:36:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:56.210 00:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:56.210 { 00:20:56.210 "cntlid": 89, 00:20:56.210 "qid": 0, 00:20:56.210 "state": "enabled", 00:20:56.210 "listen_address": { 00:20:56.210 "trtype": "TCP", 00:20:56.210 "adrfam": "IPv4", 00:20:56.210 "traddr": "10.0.0.2", 00:20:56.210 "trsvcid": "4420" 00:20:56.210 }, 00:20:56.210 "peer_address": { 00:20:56.210 "trtype": "TCP", 00:20:56.210 "adrfam": "IPv4", 00:20:56.210 "traddr": "10.0.0.1", 00:20:56.210 "trsvcid": "43124" 00:20:56.210 }, 00:20:56.210 "auth": { 00:20:56.210 "state": "completed", 00:20:56.210 "digest": "sha384", 00:20:56.210 "dhgroup": "ffdhe8192" 00:20:56.210 } 00:20:56.210 } 00:20:56.210 ]' 00:20:56.210 00:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:56.210 00:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.210 00:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:56.210 00:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:56.210 00:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:56.210 00:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.210 00:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.210 00:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.468 00:36:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:00:NDRjNGM1MzA4ZDVmMGQ5OGYyNDAyZDg1MDJiNGRhYTY1NjBlYWJmNGZlYWE4MDc52HBcqg==: 00:20:57.035 00:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:57.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:57.035 00:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:57.035 00:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.035 00:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.035 00:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.035 00:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:57.035 00:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:57.035 00:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:57.035 00:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 1 00:20:57.035 00:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:57.035 00:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:57.035 00:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:57.035 00:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:57.035 00:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key1 00:20:57.035 00:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.035 00:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.035 00:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.035 00:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:57.035 00:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:20:57.605 00:20:57.605 00:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:57.605 00:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:20:57.605 00:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:57.865 00:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.865 00:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.865 00:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:57.865 00:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.865 00:36:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:57.865 00:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:57.865 { 00:20:57.865 "cntlid": 91, 00:20:57.865 "qid": 0, 00:20:57.865 "state": "enabled", 00:20:57.865 "listen_address": { 00:20:57.865 "trtype": "TCP", 00:20:57.865 "adrfam": "IPv4", 00:20:57.865 "traddr": "10.0.0.2", 00:20:57.865 "trsvcid": "4420" 00:20:57.865 }, 00:20:57.865 "peer_address": { 00:20:57.865 "trtype": "TCP", 00:20:57.865 "adrfam": "IPv4", 00:20:57.865 "traddr": "10.0.0.1", 00:20:57.865 "trsvcid": "43154" 00:20:57.865 }, 00:20:57.865 "auth": { 00:20:57.865 "state": "completed", 00:20:57.865 "digest": "sha384", 00:20:57.865 "dhgroup": "ffdhe8192" 00:20:57.865 } 00:20:57.865 } 00:20:57.865 ]' 00:20:57.865 00:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:20:57.865 00:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.865 00:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:57.865 00:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:57.865 00:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:57.865 00:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.865 00:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.865 00:36:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.123 00:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:01:YTY0ZjAxMmExYzEwNzc5MjAxMmZhYzA2NGFlODhmZTjO55Yc: 00:20:58.689 00:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.689 00:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:20:58.689 00:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:58.689 00:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.689 00:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:58.689 00:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:20:58.689 00:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe8192 00:20:58.689 00:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:58.689 00:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 2 00:20:58.689 00:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:20:58.689 00:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:58.689 00:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:58.689 00:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:58.689 00:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key2 00:20:58.689 00:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:58.689 00:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.949 00:36:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:58.949 00:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:58.949 00:36:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:59.208 00:20:59.208 00:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:20:59.208 00:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:20:59.208 00:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.466 00:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.466 00:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.466 00:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:20:59.466 00:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.466 00:36:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:20:59.466 00:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:20:59.466 { 00:20:59.466 "cntlid": 93, 00:20:59.466 "qid": 0, 00:20:59.466 "state": "enabled", 00:20:59.466 "listen_address": { 00:20:59.466 "trtype": "TCP", 00:20:59.466 "adrfam": "IPv4", 00:20:59.466 "traddr": "10.0.0.2", 00:20:59.466 "trsvcid": "4420" 00:20:59.466 }, 00:20:59.466 "peer_address": { 00:20:59.466 "trtype": "TCP", 00:20:59.466 "adrfam": "IPv4", 00:20:59.466 "traddr": "10.0.0.1", 00:20:59.466 "trsvcid": "43198" 00:20:59.466 }, 00:20:59.466 "auth": { 00:20:59.466 "state": "completed", 00:20:59.466 "digest": "sha384", 00:20:59.466 "dhgroup": "ffdhe8192" 00:20:59.466 } 00:20:59.466 } 00:20:59.466 ]' 00:20:59.466 00:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 
-- # jq -r '.[0].auth.digest' 00:20:59.466 00:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.466 00:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:20:59.466 00:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:59.466 00:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:20:59.466 00:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.466 00:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.466 00:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.726 00:36:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:02:NDRmN2Q2ZTBkN2Q5ZmQ5NzRjOGNmMzVjMmUwMGM0NjMwNmE5YWUzZWZmZTAwYjljY+xEjw==: 00:21:00.292 00:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.292 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.292 00:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:00.292 00:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:00.292 00:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.292 00:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:00.292 00:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:00.292 00:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:00.292 00:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:00.550 00:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha384 ffdhe8192 3 00:21:00.550 00:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:00.550 00:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:00.550 00:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:00.550 00:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:00.550 00:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key3 00:21:00.550 00:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:00.550 00:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.550 00:36:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:00.550 00:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.550 00:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:00.809 00:21:00.809 00:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:00.809 00:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:00.809 00:36:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.075 00:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.075 00:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.075 00:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:01.075 00:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.075 00:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:01.075 00:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:01.075 { 00:21:01.075 "cntlid": 95, 00:21:01.075 "qid": 0, 00:21:01.075 "state": "enabled", 00:21:01.075 "listen_address": { 00:21:01.075 "trtype": "TCP", 00:21:01.075 "adrfam": "IPv4", 00:21:01.075 "traddr": "10.0.0.2", 00:21:01.075 "trsvcid": "4420" 00:21:01.075 }, 00:21:01.075 "peer_address": { 00:21:01.075 "trtype": "TCP", 00:21:01.075 "adrfam": "IPv4", 00:21:01.075 "traddr": "10.0.0.1", 00:21:01.075 "trsvcid": "43224" 00:21:01.075 }, 00:21:01.075 "auth": { 00:21:01.075 "state": "completed", 00:21:01.075 "digest": "sha384", 00:21:01.075 "dhgroup": "ffdhe8192" 00:21:01.075 } 00:21:01.075 } 00:21:01.075 ]' 00:21:01.075 00:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:01.075 00:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:01.075 00:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:01.075 00:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:01.075 00:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:01.075 00:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.075 00:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.075 00:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.391 00:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:03:MDAyNWQzNjRjOGQxM2M1OGZkMmUwN2QwMjMzZDNjNGY5MjRlMDQ1YmUxNTllZjkyODc2MzhjOWNmMjc5ZDA0Y6xyGPs=: 00:21:01.958 00:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.958 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.958 00:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:01.958 00:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:01.958 00:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.958 00:36:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:01.958 00:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # for digest in "${digests[@]}" 00:21:01.958 00:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.958 00:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:01.958 00:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:01.958 00:36:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:01.958 00:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 0 00:21:01.958 00:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:01.958 00:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:01.958 00:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:01.958 00:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:01.958 00:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key0 00:21:01.958 00:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:01.958 00:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.216 00:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:02.216 00:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:02.216 00:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:02.216 00:21:02.216 00:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:02.216 00:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.216 00:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:02.474 00:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.474 00:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.474 00:36:28 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:21:02.474 00:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.474 00:36:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:02.474 00:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:02.474 { 00:21:02.474 "cntlid": 97, 00:21:02.474 "qid": 0, 00:21:02.474 "state": "enabled", 00:21:02.474 "listen_address": { 00:21:02.474 "trtype": "TCP", 00:21:02.474 "adrfam": "IPv4", 00:21:02.474 "traddr": "10.0.0.2", 00:21:02.474 "trsvcid": "4420" 00:21:02.474 }, 00:21:02.474 "peer_address": { 00:21:02.474 "trtype": "TCP", 00:21:02.474 "adrfam": "IPv4", 00:21:02.474 "traddr": "10.0.0.1", 00:21:02.474 "trsvcid": "38280" 00:21:02.474 }, 00:21:02.474 "auth": { 00:21:02.474 "state": "completed", 00:21:02.474 "digest": "sha512", 00:21:02.474 "dhgroup": "null" 00:21:02.474 } 00:21:02.474 } 00:21:02.474 ]' 00:21:02.474 00:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:02.474 00:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:02.474 00:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:02.474 00:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:02.474 00:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:02.474 00:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.474 00:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.474 00:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.733 00:36:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:00:NDRjNGM1MzA4ZDVmMGQ5OGYyNDAyZDg1MDJiNGRhYTY1NjBlYWJmNGZlYWE4MDc52HBcqg==: 00:21:03.303 00:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.303 00:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:03.303 00:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.303 00:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.303 00:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.303 00:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:03.303 00:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:03.303 00:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:03.562 00:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 1 00:21:03.562 00:36:29 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:03.562 00:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:03.562 00:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:03.562 00:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:03.562 00:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key1 00:21:03.562 00:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.562 00:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.562 00:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.562 00:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:03.562 00:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:03.562 00:21:03.562 00:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:03.562 00:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:03.562 00:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.820 00:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.820 00:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.820 00:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:03.820 00:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.820 00:36:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:03.820 00:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:03.820 { 00:21:03.820 "cntlid": 99, 00:21:03.820 "qid": 0, 00:21:03.820 "state": "enabled", 00:21:03.820 "listen_address": { 00:21:03.820 "trtype": "TCP", 00:21:03.820 "adrfam": "IPv4", 00:21:03.820 "traddr": "10.0.0.2", 00:21:03.820 "trsvcid": "4420" 00:21:03.820 }, 00:21:03.820 "peer_address": { 00:21:03.820 "trtype": "TCP", 00:21:03.820 "adrfam": "IPv4", 00:21:03.820 "traddr": "10.0.0.1", 00:21:03.820 "trsvcid": "38288" 00:21:03.820 }, 00:21:03.820 "auth": { 00:21:03.820 "state": "completed", 00:21:03.820 "digest": "sha512", 00:21:03.820 "dhgroup": "null" 00:21:03.820 } 00:21:03.820 } 00:21:03.820 ]' 00:21:03.820 00:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:03.820 00:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.820 00:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:03.820 00:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:03.820 00:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r 
'.[0].auth.state' 00:21:03.820 00:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.820 00:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.820 00:36:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.077 00:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:01:YTY0ZjAxMmExYzEwNzc5MjAxMmZhYzA2NGFlODhmZTjO55Yc: 00:21:04.646 00:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.646 00:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:04.646 00:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.646 00:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.646 00:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.646 00:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:04.646 00:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:04.646 00:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:04.906 00:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 2 00:21:04.906 00:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:04.906 00:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:04.906 00:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:04.906 00:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:04.906 00:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key2 00:21:04.906 00:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:04.906 00:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.906 00:36:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:04.906 00:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:04.906 00:36:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
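[Editor's note] The trace above repeats the same connect_authenticate round for every (digest, dhgroup, key) combination. For readability, the following is a minimal sketch of one such round, reconstructed only from the rpc.py and nvme-cli invocations visible in this log; the rpc.py path, the target-side RPC socket (default, no -s flag), and the pre-registered key names (key0..key3) are assumptions about the earlier setup of this run, not values confirmed here.

#!/usr/bin/env bash
# Minimal sketch of one connect_authenticate round, as repeated in the trace above.
# Assumptions: rpc.py is on PATH (the log uses scripts/rpc.py from the SPDK tree),
# the target listens on its default RPC socket, the host-side bdev_nvme instance
# uses /var/tmp/host.sock, and DH-HMAC-CHAP keys key0..key3 were registered
# earlier in the run.
set -euo pipefail

rpc=rpc.py                                   # hypothetical invocation; adjust to your SPDK checkout
host_sock=/var/tmp/host.sock                 # host-side RPC socket, as in the trace
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda
digest=sha512 dhgroup=ffdhe2048 key=key1     # one (digest, dhgroup, key) combination

# 1. Restrict the host-side initiator to the digest/dhgroup under test.
"$rpc" -s "$host_sock" bdev_nvme_set_options \
    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# 2. Allow the host NQN on the subsystem with the chosen key (target-side RPC).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "$key"

# 3. Attach a controller from the host side, authenticating with the same key.
"$rpc" -s "$host_sock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key "$key"

# 4. Verify the controller exists and that the qpair reports completed
#    authentication with the expected digest and dhgroup.
[[ "$("$rpc" -s "$host_sock" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]
[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == "$digest" ]]
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == "$dhgroup" ]]

# 5. Tear down the host-side controller before the next combination.
"$rpc" -s "$host_sock" bdev_nvme_detach_controller nvme0

The kernel-initiator half of each round replaces step 3 with `nvme connect -t tcp -a 10.0.0.2 -n <subnqn> -i 1 -q <hostnqn> --hostid <uuid> --dhchap-secret DHHC-1:NN:<base64>:` followed by `nvme disconnect -n <subnqn>` and `nvmf_subsystem_remove_host`, exactly as the surrounding log lines show.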
00:21:05.166 00:21:05.166 00:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:05.166 00:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:05.166 00:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.166 00:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.166 00:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.166 00:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:05.166 00:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.166 00:36:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:05.166 00:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:05.166 { 00:21:05.166 "cntlid": 101, 00:21:05.166 "qid": 0, 00:21:05.166 "state": "enabled", 00:21:05.166 "listen_address": { 00:21:05.166 "trtype": "TCP", 00:21:05.166 "adrfam": "IPv4", 00:21:05.166 "traddr": "10.0.0.2", 00:21:05.166 "trsvcid": "4420" 00:21:05.166 }, 00:21:05.166 "peer_address": { 00:21:05.166 "trtype": "TCP", 00:21:05.166 "adrfam": "IPv4", 00:21:05.166 "traddr": "10.0.0.1", 00:21:05.166 "trsvcid": "38312" 00:21:05.166 }, 00:21:05.166 "auth": { 00:21:05.166 "state": "completed", 00:21:05.166 "digest": "sha512", 00:21:05.166 "dhgroup": "null" 00:21:05.166 } 00:21:05.166 } 00:21:05.166 ]' 00:21:05.166 00:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:05.424 00:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:05.424 00:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:05.424 00:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:05.424 00:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:05.424 00:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.424 00:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.425 00:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.425 00:36:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:02:NDRmN2Q2ZTBkN2Q5ZmQ5NzRjOGNmMzVjMmUwMGM0NjMwNmE5YWUzZWZmZTAwYjljY+xEjw==: 00:21:06.360 00:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.360 00:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:06.360 00:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:06.360 00:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.360 00:36:32 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:06.360 00:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:06.360 00:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:06.360 00:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:06.360 00:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 null 3 00:21:06.360 00:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:06.360 00:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:06.360 00:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:06.360 00:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:06.360 00:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key3 00:21:06.360 00:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:06.360 00:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.360 00:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:06.360 00:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:06.360 00:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:06.620 00:21:06.620 00:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:06.620 00:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:06.620 00:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.620 00:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.620 00:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.620 00:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:06.620 00:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.620 00:36:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:06.620 00:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:06.620 { 00:21:06.620 "cntlid": 103, 00:21:06.620 "qid": 0, 00:21:06.620 "state": "enabled", 00:21:06.620 "listen_address": { 00:21:06.620 "trtype": "TCP", 00:21:06.620 "adrfam": "IPv4", 00:21:06.620 "traddr": "10.0.0.2", 00:21:06.620 "trsvcid": "4420" 00:21:06.620 }, 00:21:06.620 "peer_address": { 00:21:06.620 "trtype": "TCP", 00:21:06.620 "adrfam": "IPv4", 00:21:06.620 "traddr": "10.0.0.1", 00:21:06.620 
"trsvcid": "38340" 00:21:06.620 }, 00:21:06.620 "auth": { 00:21:06.620 "state": "completed", 00:21:06.620 "digest": "sha512", 00:21:06.620 "dhgroup": "null" 00:21:06.620 } 00:21:06.620 } 00:21:06.620 ]' 00:21:06.620 00:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:06.620 00:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.620 00:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:06.878 00:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ null == \n\u\l\l ]] 00:21:06.878 00:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:06.878 00:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.878 00:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.878 00:36:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.878 00:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:03:MDAyNWQzNjRjOGQxM2M1OGZkMmUwN2QwMjMzZDNjNGY5MjRlMDQ1YmUxNTllZjkyODc2MzhjOWNmMjc5ZDA0Y6xyGPs=: 00:21:07.444 00:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.702 00:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:07.702 00:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.702 00:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.702 00:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.702 00:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:07.702 00:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:07.702 00:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:07.702 00:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:07.702 00:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 0 00:21:07.702 00:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:07.702 00:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:07.702 00:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:07.702 00:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:07.702 00:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key0 00:21:07.702 00:36:33 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.702 00:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.702 00:36:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:07.702 00:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:07.702 00:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:07.961 00:21:07.961 00:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:07.961 00:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:07.961 00:36:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.961 00:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.961 00:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.961 00:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:07.961 00:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.222 00:36:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:08.222 00:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:08.222 { 00:21:08.222 "cntlid": 105, 00:21:08.222 "qid": 0, 00:21:08.222 "state": "enabled", 00:21:08.222 "listen_address": { 00:21:08.222 "trtype": "TCP", 00:21:08.222 "adrfam": "IPv4", 00:21:08.222 "traddr": "10.0.0.2", 00:21:08.222 "trsvcid": "4420" 00:21:08.222 }, 00:21:08.222 "peer_address": { 00:21:08.222 "trtype": "TCP", 00:21:08.222 "adrfam": "IPv4", 00:21:08.222 "traddr": "10.0.0.1", 00:21:08.222 "trsvcid": "38350" 00:21:08.222 }, 00:21:08.222 "auth": { 00:21:08.222 "state": "completed", 00:21:08.222 "digest": "sha512", 00:21:08.222 "dhgroup": "ffdhe2048" 00:21:08.222 } 00:21:08.222 } 00:21:08.222 ]' 00:21:08.222 00:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:08.222 00:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.222 00:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:08.222 00:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:08.222 00:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:08.222 00:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.222 00:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.222 00:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.481 00:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:00:NDRjNGM1MzA4ZDVmMGQ5OGYyNDAyZDg1MDJiNGRhYTY1NjBlYWJmNGZlYWE4MDc52HBcqg==: 00:21:09.048 00:36:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.048 00:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:09.048 00:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:09.048 00:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.048 00:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:09.048 00:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:09.048 00:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:09.048 00:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:09.048 00:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 1 00:21:09.048 00:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:09.048 00:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:09.048 00:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:09.048 00:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:09.048 00:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key1 00:21:09.048 00:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:09.048 00:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.048 00:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:09.048 00:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:09.048 00:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:09.306 00:21:09.306 00:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:09.306 00:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.306 00:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:09.564 00:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.564 
00:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.564 00:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:09.564 00:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.564 00:36:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:09.564 00:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:09.564 { 00:21:09.564 "cntlid": 107, 00:21:09.564 "qid": 0, 00:21:09.564 "state": "enabled", 00:21:09.564 "listen_address": { 00:21:09.564 "trtype": "TCP", 00:21:09.564 "adrfam": "IPv4", 00:21:09.564 "traddr": "10.0.0.2", 00:21:09.564 "trsvcid": "4420" 00:21:09.564 }, 00:21:09.564 "peer_address": { 00:21:09.564 "trtype": "TCP", 00:21:09.564 "adrfam": "IPv4", 00:21:09.564 "traddr": "10.0.0.1", 00:21:09.564 "trsvcid": "38370" 00:21:09.564 }, 00:21:09.564 "auth": { 00:21:09.564 "state": "completed", 00:21:09.564 "digest": "sha512", 00:21:09.564 "dhgroup": "ffdhe2048" 00:21:09.564 } 00:21:09.564 } 00:21:09.564 ]' 00:21:09.564 00:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:09.564 00:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:09.564 00:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:09.564 00:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:09.564 00:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:09.564 00:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.564 00:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.564 00:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.823 00:36:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:01:YTY0ZjAxMmExYzEwNzc5MjAxMmZhYzA2NGFlODhmZTjO55Yc: 00:21:10.391 00:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:10.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:10.391 00:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:10.391 00:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:10.391 00:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.391 00:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:10.391 00:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:10.391 00:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:10.392 00:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups 
ffdhe2048 00:21:10.652 00:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 2 00:21:10.652 00:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:10.652 00:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:10.652 00:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:10.652 00:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:10.652 00:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key2 00:21:10.652 00:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:10.652 00:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.652 00:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:10.652 00:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:10.652 00:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:10.652 00:21:10.652 00:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:10.652 00:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:10.652 00:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.910 00:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.910 00:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.910 00:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:10.911 00:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.911 00:36:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:10.911 00:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:10.911 { 00:21:10.911 "cntlid": 109, 00:21:10.911 "qid": 0, 00:21:10.911 "state": "enabled", 00:21:10.911 "listen_address": { 00:21:10.911 "trtype": "TCP", 00:21:10.911 "adrfam": "IPv4", 00:21:10.911 "traddr": "10.0.0.2", 00:21:10.911 "trsvcid": "4420" 00:21:10.911 }, 00:21:10.911 "peer_address": { 00:21:10.911 "trtype": "TCP", 00:21:10.911 "adrfam": "IPv4", 00:21:10.911 "traddr": "10.0.0.1", 00:21:10.911 "trsvcid": "38390" 00:21:10.911 }, 00:21:10.911 "auth": { 00:21:10.911 "state": "completed", 00:21:10.911 "digest": "sha512", 00:21:10.911 "dhgroup": "ffdhe2048" 00:21:10.911 } 00:21:10.911 } 00:21:10.911 ]' 00:21:10.911 00:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:10.911 00:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.911 00:36:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.dhgroup' 00:21:10.911 00:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:10.911 00:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:10.911 00:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.911 00:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.911 00:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.168 00:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:02:NDRmN2Q2ZTBkN2Q5ZmQ5NzRjOGNmMzVjMmUwMGM0NjMwNmE5YWUzZWZmZTAwYjljY+xEjw==: 00:21:11.736 00:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.995 00:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:11.995 00:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:11.995 00:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.995 00:36:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:11.995 00:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:11.995 00:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:11.995 00:36:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:11.995 00:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe2048 3 00:21:11.995 00:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:11.995 00:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:11.995 00:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:11.995 00:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:11.995 00:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key3 00:21:11.995 00:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:11.995 00:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.995 00:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:11.995 00:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:11.995 00:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:12.254 00:21:12.254 00:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:12.254 00:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.254 00:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:12.254 00:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.514 00:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.514 00:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:12.514 00:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.514 00:36:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:12.514 00:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:12.514 { 00:21:12.514 "cntlid": 111, 00:21:12.514 "qid": 0, 00:21:12.514 "state": "enabled", 00:21:12.514 "listen_address": { 00:21:12.514 "trtype": "TCP", 00:21:12.514 "adrfam": "IPv4", 00:21:12.514 "traddr": "10.0.0.2", 00:21:12.514 "trsvcid": "4420" 00:21:12.514 }, 00:21:12.514 "peer_address": { 00:21:12.514 "trtype": "TCP", 00:21:12.514 "adrfam": "IPv4", 00:21:12.514 "traddr": "10.0.0.1", 00:21:12.514 "trsvcid": "37702" 00:21:12.514 }, 00:21:12.514 "auth": { 00:21:12.514 "state": "completed", 00:21:12.514 "digest": "sha512", 00:21:12.514 "dhgroup": "ffdhe2048" 00:21:12.514 } 00:21:12.514 } 00:21:12.514 ]' 00:21:12.514 00:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:12.514 00:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.514 00:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:12.514 00:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:12.515 00:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:12.515 00:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.515 00:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.515 00:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.773 00:36:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:03:MDAyNWQzNjRjOGQxM2M1OGZkMmUwN2QwMjMzZDNjNGY5MjRlMDQ1YmUxNTllZjkyODc2MzhjOWNmMjc5ZDA0Y6xyGPs=: 00:21:13.340 00:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.340 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.340 00:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:13.340 00:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:13.340 00:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.340 00:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:13.340 00:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:13.340 00:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:13.340 00:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:13.340 00:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:13.340 00:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 0 00:21:13.340 00:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:13.340 00:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:13.340 00:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:13.340 00:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:13.340 00:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key0 00:21:13.340 00:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:13.340 00:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.340 00:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:13.340 00:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:13.340 00:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:13.600 00:21:13.600 00:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:13.600 00:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:13.600 00:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.860 00:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.860 00:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.860 00:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:13.860 00:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.860 00:36:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:13.860 00:36:39 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # qpairs='[ 00:21:13.860 { 00:21:13.860 "cntlid": 113, 00:21:13.860 "qid": 0, 00:21:13.860 "state": "enabled", 00:21:13.860 "listen_address": { 00:21:13.860 "trtype": "TCP", 00:21:13.860 "adrfam": "IPv4", 00:21:13.860 "traddr": "10.0.0.2", 00:21:13.860 "trsvcid": "4420" 00:21:13.860 }, 00:21:13.860 "peer_address": { 00:21:13.860 "trtype": "TCP", 00:21:13.860 "adrfam": "IPv4", 00:21:13.860 "traddr": "10.0.0.1", 00:21:13.860 "trsvcid": "37732" 00:21:13.860 }, 00:21:13.860 "auth": { 00:21:13.860 "state": "completed", 00:21:13.860 "digest": "sha512", 00:21:13.860 "dhgroup": "ffdhe3072" 00:21:13.860 } 00:21:13.860 } 00:21:13.860 ]' 00:21:13.860 00:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:13.860 00:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:13.860 00:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:13.860 00:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:13.860 00:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:13.860 00:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.860 00:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.860 00:36:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.119 00:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:00:NDRjNGM1MzA4ZDVmMGQ5OGYyNDAyZDg1MDJiNGRhYTY1NjBlYWJmNGZlYWE4MDc52HBcqg==: 00:21:14.689 00:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.689 00:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:14.689 00:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:14.689 00:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.689 00:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:14.689 00:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:14.690 00:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:14.690 00:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:14.949 00:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 1 00:21:14.949 00:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:14.949 00:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:14.949 00:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:14.949 00:36:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:14.949 00:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key1 00:21:14.949 00:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:14.949 00:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.949 00:36:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:14.949 00:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:14.949 00:36:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:14.949 00:21:14.949 00:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:14.949 00:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:14.949 00:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.208 00:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.208 00:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.208 00:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:15.208 00:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.208 00:36:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:15.208 00:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:15.208 { 00:21:15.208 "cntlid": 115, 00:21:15.208 "qid": 0, 00:21:15.208 "state": "enabled", 00:21:15.208 "listen_address": { 00:21:15.208 "trtype": "TCP", 00:21:15.208 "adrfam": "IPv4", 00:21:15.208 "traddr": "10.0.0.2", 00:21:15.208 "trsvcid": "4420" 00:21:15.208 }, 00:21:15.208 "peer_address": { 00:21:15.208 "trtype": "TCP", 00:21:15.208 "adrfam": "IPv4", 00:21:15.208 "traddr": "10.0.0.1", 00:21:15.208 "trsvcid": "37764" 00:21:15.208 }, 00:21:15.208 "auth": { 00:21:15.208 "state": "completed", 00:21:15.208 "digest": "sha512", 00:21:15.208 "dhgroup": "ffdhe3072" 00:21:15.208 } 00:21:15.208 } 00:21:15.208 ]' 00:21:15.208 00:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:15.208 00:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.208 00:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:15.208 00:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:15.208 00:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:15.208 00:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.208 00:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:21:15.208 00:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.468 00:36:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:01:YTY0ZjAxMmExYzEwNzc5MjAxMmZhYzA2NGFlODhmZTjO55Yc: 00:21:16.038 00:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.038 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.038 00:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:16.038 00:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:16.038 00:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.038 00:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:16.038 00:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:16.038 00:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:16.038 00:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:16.297 00:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 2 00:21:16.297 00:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:16.297 00:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:16.297 00:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:16.297 00:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:16.298 00:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key2 00:21:16.298 00:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:16.298 00:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.298 00:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:16.298 00:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:16.298 00:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:16.556 00:21:16.556 00:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:16.556 00:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r 
'.[].name' 00:21:16.556 00:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.556 00:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.556 00:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.556 00:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:16.556 00:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.556 00:36:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:16.556 00:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:16.556 { 00:21:16.556 "cntlid": 117, 00:21:16.556 "qid": 0, 00:21:16.556 "state": "enabled", 00:21:16.556 "listen_address": { 00:21:16.556 "trtype": "TCP", 00:21:16.556 "adrfam": "IPv4", 00:21:16.556 "traddr": "10.0.0.2", 00:21:16.556 "trsvcid": "4420" 00:21:16.556 }, 00:21:16.556 "peer_address": { 00:21:16.556 "trtype": "TCP", 00:21:16.556 "adrfam": "IPv4", 00:21:16.556 "traddr": "10.0.0.1", 00:21:16.556 "trsvcid": "37780" 00:21:16.556 }, 00:21:16.556 "auth": { 00:21:16.556 "state": "completed", 00:21:16.556 "digest": "sha512", 00:21:16.556 "dhgroup": "ffdhe3072" 00:21:16.556 } 00:21:16.556 } 00:21:16.556 ]' 00:21:16.556 00:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:16.556 00:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.556 00:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:16.814 00:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:16.814 00:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:16.814 00:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.814 00:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.814 00:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.814 00:36:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:02:NDRmN2Q2ZTBkN2Q5ZmQ5NzRjOGNmMzVjMmUwMGM0NjMwNmE5YWUzZWZmZTAwYjljY+xEjw==: 00:21:17.381 00:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.637 00:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:17.637 00:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:17.637 00:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.637 00:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:17.637 00:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:17.637 00:36:43 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:17.637 00:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:17.637 00:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe3072 3 00:21:17.637 00:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:17.637 00:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:17.637 00:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:17.637 00:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:17.637 00:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key3 00:21:17.637 00:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:17.637 00:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.637 00:36:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:17.637 00:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:17.637 00:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:17.895 00:21:17.895 00:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:17.895 00:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:17.895 00:36:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.895 00:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.154 00:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.154 00:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.154 00:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.154 00:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.154 00:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:18.154 { 00:21:18.154 "cntlid": 119, 00:21:18.154 "qid": 0, 00:21:18.154 "state": "enabled", 00:21:18.154 "listen_address": { 00:21:18.154 "trtype": "TCP", 00:21:18.154 "adrfam": "IPv4", 00:21:18.154 "traddr": "10.0.0.2", 00:21:18.154 "trsvcid": "4420" 00:21:18.154 }, 00:21:18.154 "peer_address": { 00:21:18.154 "trtype": "TCP", 00:21:18.154 "adrfam": "IPv4", 00:21:18.154 "traddr": "10.0.0.1", 00:21:18.154 "trsvcid": "37816" 00:21:18.154 }, 00:21:18.154 "auth": { 00:21:18.154 "state": "completed", 00:21:18.154 "digest": "sha512", 00:21:18.154 "dhgroup": "ffdhe3072" 00:21:18.154 } 
00:21:18.154 } 00:21:18.154 ]' 00:21:18.154 00:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:18.154 00:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.154 00:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:18.154 00:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:18.154 00:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:18.154 00:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.154 00:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.154 00:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.413 00:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:03:MDAyNWQzNjRjOGQxM2M1OGZkMmUwN2QwMjMzZDNjNGY5MjRlMDQ1YmUxNTllZjkyODc2MzhjOWNmMjc5ZDA0Y6xyGPs=: 00:21:18.979 00:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.979 00:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:18.979 00:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.979 00:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.979 00:36:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.979 00:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:18.979 00:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:18.979 00:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:18.980 00:36:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:18.980 00:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 0 00:21:18.980 00:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:18.980 00:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:18.980 00:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:18.980 00:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:18.980 00:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key0 00:21:18.980 00:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:18.980 00:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.980 00:36:45 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:18.980 00:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:18.980 00:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:19.237 00:21:19.237 00:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:19.237 00:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:19.237 00:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.494 00:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.494 00:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.494 00:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:19.494 00:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.494 00:36:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:19.494 00:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:19.494 { 00:21:19.494 "cntlid": 121, 00:21:19.494 "qid": 0, 00:21:19.494 "state": "enabled", 00:21:19.494 "listen_address": { 00:21:19.494 "trtype": "TCP", 00:21:19.494 "adrfam": "IPv4", 00:21:19.494 "traddr": "10.0.0.2", 00:21:19.494 "trsvcid": "4420" 00:21:19.494 }, 00:21:19.494 "peer_address": { 00:21:19.494 "trtype": "TCP", 00:21:19.494 "adrfam": "IPv4", 00:21:19.494 "traddr": "10.0.0.1", 00:21:19.494 "trsvcid": "37858" 00:21:19.494 }, 00:21:19.494 "auth": { 00:21:19.494 "state": "completed", 00:21:19.494 "digest": "sha512", 00:21:19.494 "dhgroup": "ffdhe4096" 00:21:19.494 } 00:21:19.494 } 00:21:19.494 ]' 00:21:19.494 00:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:19.494 00:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.494 00:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:19.494 00:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:19.495 00:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:19.495 00:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.495 00:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.495 00:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.752 00:36:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret 
DHHC-1:00:NDRjNGM1MzA4ZDVmMGQ5OGYyNDAyZDg1MDJiNGRhYTY1NjBlYWJmNGZlYWE4MDc52HBcqg==: 00:21:20.321 00:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.321 00:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:20.321 00:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:20.321 00:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.321 00:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:20.321 00:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:20.321 00:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:20.321 00:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:20.580 00:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 1 00:21:20.580 00:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:20.580 00:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:20.580 00:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:20.580 00:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:20.580 00:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key1 00:21:20.580 00:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:20.580 00:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.580 00:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:20.580 00:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:20.580 00:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:20.838 00:21:20.838 00:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:20.838 00:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:20.838 00:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.838 00:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.838 00:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.838 00:36:46 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@560 -- # xtrace_disable 00:21:20.838 00:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.838 00:36:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:20.838 00:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:20.838 { 00:21:20.838 "cntlid": 123, 00:21:20.838 "qid": 0, 00:21:20.838 "state": "enabled", 00:21:20.838 "listen_address": { 00:21:20.838 "trtype": "TCP", 00:21:20.838 "adrfam": "IPv4", 00:21:20.838 "traddr": "10.0.0.2", 00:21:20.838 "trsvcid": "4420" 00:21:20.838 }, 00:21:20.838 "peer_address": { 00:21:20.838 "trtype": "TCP", 00:21:20.838 "adrfam": "IPv4", 00:21:20.838 "traddr": "10.0.0.1", 00:21:20.838 "trsvcid": "37890" 00:21:20.838 }, 00:21:20.838 "auth": { 00:21:20.838 "state": "completed", 00:21:20.838 "digest": "sha512", 00:21:20.839 "dhgroup": "ffdhe4096" 00:21:20.839 } 00:21:20.839 } 00:21:20.839 ]' 00:21:20.839 00:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:20.839 00:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.839 00:36:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:21.095 00:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:21.095 00:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:21.095 00:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.095 00:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.095 00:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.095 00:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:01:YTY0ZjAxMmExYzEwNzc5MjAxMmZhYzA2NGFlODhmZTjO55Yc: 00:21:21.660 00:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.660 00:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:21.660 00:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:21.660 00:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.660 00:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:21.660 00:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:21.660 00:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:21.660 00:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:21.919 00:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 2 00:21:21.919 00:36:47 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:21.919 00:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:21.919 00:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:21.919 00:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:21.919 00:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key2 00:21:21.919 00:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:21.919 00:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.919 00:36:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:21.919 00:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:21.919 00:36:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:22.178 00:21:22.178 00:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:22.178 00:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:22.178 00:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.438 00:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.438 00:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.438 00:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:22.438 00:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.438 00:36:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:22.438 00:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:22.438 { 00:21:22.438 "cntlid": 125, 00:21:22.438 "qid": 0, 00:21:22.438 "state": "enabled", 00:21:22.438 "listen_address": { 00:21:22.438 "trtype": "TCP", 00:21:22.438 "adrfam": "IPv4", 00:21:22.438 "traddr": "10.0.0.2", 00:21:22.438 "trsvcid": "4420" 00:21:22.438 }, 00:21:22.438 "peer_address": { 00:21:22.438 "trtype": "TCP", 00:21:22.438 "adrfam": "IPv4", 00:21:22.438 "traddr": "10.0.0.1", 00:21:22.438 "trsvcid": "60342" 00:21:22.438 }, 00:21:22.438 "auth": { 00:21:22.438 "state": "completed", 00:21:22.438 "digest": "sha512", 00:21:22.438 "dhgroup": "ffdhe4096" 00:21:22.438 } 00:21:22.438 } 00:21:22.438 ]' 00:21:22.438 00:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:22.438 00:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.438 00:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:22.438 00:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:22.438 00:36:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:22.438 00:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.438 00:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.438 00:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.696 00:36:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:02:NDRmN2Q2ZTBkN2Q5ZmQ5NzRjOGNmMzVjMmUwMGM0NjMwNmE5YWUzZWZmZTAwYjljY+xEjw==: 00:21:23.262 00:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.262 00:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:23.262 00:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:23.262 00:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.262 00:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:23.262 00:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:23.262 00:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:23.262 00:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:23.262 00:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe4096 3 00:21:23.262 00:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:23.262 00:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:23.262 00:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:23.262 00:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:23.262 00:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key3 00:21:23.262 00:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:23.262 00:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.262 00:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:23.262 00:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:23.262 00:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:23.521 00:21:23.521 00:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:23.521 00:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:23.521 00:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.781 00:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.781 00:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.781 00:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:23.781 00:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.781 00:36:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:23.781 00:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:23.781 { 00:21:23.781 "cntlid": 127, 00:21:23.781 "qid": 0, 00:21:23.781 "state": "enabled", 00:21:23.781 "listen_address": { 00:21:23.781 "trtype": "TCP", 00:21:23.781 "adrfam": "IPv4", 00:21:23.781 "traddr": "10.0.0.2", 00:21:23.781 "trsvcid": "4420" 00:21:23.781 }, 00:21:23.781 "peer_address": { 00:21:23.781 "trtype": "TCP", 00:21:23.781 "adrfam": "IPv4", 00:21:23.781 "traddr": "10.0.0.1", 00:21:23.781 "trsvcid": "60370" 00:21:23.781 }, 00:21:23.781 "auth": { 00:21:23.781 "state": "completed", 00:21:23.781 "digest": "sha512", 00:21:23.781 "dhgroup": "ffdhe4096" 00:21:23.781 } 00:21:23.781 } 00:21:23.781 ]' 00:21:23.781 00:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:23.781 00:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:23.781 00:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:23.781 00:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:23.781 00:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:23.781 00:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.781 00:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.781 00:36:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.041 00:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:03:MDAyNWQzNjRjOGQxM2M1OGZkMmUwN2QwMjMzZDNjNGY5MjRlMDQ1YmUxNTllZjkyODc2MzhjOWNmMjc5ZDA0Y6xyGPs=: 00:21:24.607 00:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.607 00:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:24.607 00:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 
00:21:24.607 00:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.607 00:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:24.607 00:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:24.607 00:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:24.607 00:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:24.607 00:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:24.607 00:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 0 00:21:24.607 00:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:24.607 00:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:24.607 00:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:24.607 00:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:24.607 00:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key0 00:21:24.607 00:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:24.607 00:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.607 00:36:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:24.607 00:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:24.607 00:36:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:25.247 00:21:25.247 00:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:25.247 00:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:25.247 00:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.247 00:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.247 00:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.247 00:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:25.247 00:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.247 00:36:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:25.247 00:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:25.247 { 00:21:25.247 "cntlid": 129, 00:21:25.247 "qid": 0, 00:21:25.247 "state": "enabled", 00:21:25.247 "listen_address": { 00:21:25.247 
"trtype": "TCP", 00:21:25.247 "adrfam": "IPv4", 00:21:25.247 "traddr": "10.0.0.2", 00:21:25.247 "trsvcid": "4420" 00:21:25.247 }, 00:21:25.247 "peer_address": { 00:21:25.247 "trtype": "TCP", 00:21:25.247 "adrfam": "IPv4", 00:21:25.247 "traddr": "10.0.0.1", 00:21:25.247 "trsvcid": "60398" 00:21:25.247 }, 00:21:25.247 "auth": { 00:21:25.247 "state": "completed", 00:21:25.247 "digest": "sha512", 00:21:25.247 "dhgroup": "ffdhe6144" 00:21:25.247 } 00:21:25.247 } 00:21:25.247 ]' 00:21:25.247 00:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:25.247 00:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.247 00:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:25.247 00:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:25.247 00:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:25.247 00:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.247 00:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.247 00:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.526 00:36:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:00:NDRjNGM1MzA4ZDVmMGQ5OGYyNDAyZDg1MDJiNGRhYTY1NjBlYWJmNGZlYWE4MDc52HBcqg==: 00:21:26.095 00:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.095 00:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:26.095 00:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:26.095 00:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.095 00:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.095 00:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:26.095 00:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:26.095 00:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:26.095 00:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 1 00:21:26.095 00:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:26.095 00:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:26.095 00:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:26.095 00:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:26.095 00:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key1 00:21:26.095 00:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:26.095 00:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.095 00:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.095 00:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:26.095 00:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:26.354 00:21:26.354 00:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:26.354 00:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:26.354 00:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.613 00:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:26.613 00:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:26.613 00:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:26.613 00:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.613 00:36:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:26.613 00:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:26.613 { 00:21:26.613 "cntlid": 131, 00:21:26.613 "qid": 0, 00:21:26.613 "state": "enabled", 00:21:26.613 "listen_address": { 00:21:26.613 "trtype": "TCP", 00:21:26.613 "adrfam": "IPv4", 00:21:26.613 "traddr": "10.0.0.2", 00:21:26.613 "trsvcid": "4420" 00:21:26.613 }, 00:21:26.613 "peer_address": { 00:21:26.613 "trtype": "TCP", 00:21:26.613 "adrfam": "IPv4", 00:21:26.613 "traddr": "10.0.0.1", 00:21:26.613 "trsvcid": "60428" 00:21:26.613 }, 00:21:26.613 "auth": { 00:21:26.613 "state": "completed", 00:21:26.613 "digest": "sha512", 00:21:26.613 "dhgroup": "ffdhe6144" 00:21:26.613 } 00:21:26.613 } 00:21:26.613 ]' 00:21:26.613 00:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:26.613 00:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:26.613 00:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:26.613 00:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:26.613 00:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:26.613 00:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:26.613 00:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:26.613 00:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:21:26.871 00:36:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:01:YTY0ZjAxMmExYzEwNzc5MjAxMmZhYzA2NGFlODhmZTjO55Yc: 00:21:27.442 00:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:27.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:27.442 00:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:27.442 00:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:27.442 00:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.442 00:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:27.442 00:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:27.443 00:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:27.443 00:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:27.701 00:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 2 00:21:27.701 00:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:27.701 00:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:27.701 00:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:27.701 00:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:27.701 00:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key2 00:21:27.701 00:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:27.701 00:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.701 00:36:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:27.701 00:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:27.701 00:36:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:27.960 00:21:27.960 00:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:27.960 00:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:27.960 00:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:21:28.217 00:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.217 00:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.217 00:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:28.217 00:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.217 00:36:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:28.217 00:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:28.217 { 00:21:28.217 "cntlid": 133, 00:21:28.217 "qid": 0, 00:21:28.217 "state": "enabled", 00:21:28.217 "listen_address": { 00:21:28.217 "trtype": "TCP", 00:21:28.217 "adrfam": "IPv4", 00:21:28.217 "traddr": "10.0.0.2", 00:21:28.217 "trsvcid": "4420" 00:21:28.217 }, 00:21:28.217 "peer_address": { 00:21:28.217 "trtype": "TCP", 00:21:28.217 "adrfam": "IPv4", 00:21:28.217 "traddr": "10.0.0.1", 00:21:28.217 "trsvcid": "60456" 00:21:28.217 }, 00:21:28.217 "auth": { 00:21:28.217 "state": "completed", 00:21:28.217 "digest": "sha512", 00:21:28.217 "dhgroup": "ffdhe6144" 00:21:28.217 } 00:21:28.217 } 00:21:28.217 ]' 00:21:28.217 00:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:28.217 00:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.217 00:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:28.217 00:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:28.217 00:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:28.217 00:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.217 00:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.217 00:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.476 00:36:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:02:NDRmN2Q2ZTBkN2Q5ZmQ5NzRjOGNmMzVjMmUwMGM0NjMwNmE5YWUzZWZmZTAwYjljY+xEjw==: 00:21:29.044 00:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.044 00:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:29.044 00:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:29.044 00:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.044 00:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:29.044 00:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:29.044 00:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:29.044 00:36:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:29.303 00:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe6144 3 00:21:29.303 00:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:29.303 00:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:29.303 00:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:29.303 00:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:29.303 00:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key3 00:21:29.303 00:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:29.303 00:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.303 00:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:29.303 00:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:29.303 00:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:29.561 00:21:29.561 00:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:29.561 00:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:29.561 00:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.561 00:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.561 00:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.561 00:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:29.561 00:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.821 00:36:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:29.821 00:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:29.821 { 00:21:29.821 "cntlid": 135, 00:21:29.821 "qid": 0, 00:21:29.821 "state": "enabled", 00:21:29.821 "listen_address": { 00:21:29.821 "trtype": "TCP", 00:21:29.821 "adrfam": "IPv4", 00:21:29.821 "traddr": "10.0.0.2", 00:21:29.821 "trsvcid": "4420" 00:21:29.821 }, 00:21:29.821 "peer_address": { 00:21:29.821 "trtype": "TCP", 00:21:29.821 "adrfam": "IPv4", 00:21:29.821 "traddr": "10.0.0.1", 00:21:29.821 "trsvcid": "60488" 00:21:29.821 }, 00:21:29.821 "auth": { 00:21:29.821 "state": "completed", 00:21:29.821 "digest": "sha512", 00:21:29.821 "dhgroup": "ffdhe6144" 00:21:29.821 } 00:21:29.821 } 00:21:29.821 ]' 00:21:29.821 00:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:29.821 00:36:55 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.821 00:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:29.821 00:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:29.821 00:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:29.821 00:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.821 00:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.821 00:36:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.080 00:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:03:MDAyNWQzNjRjOGQxM2M1OGZkMmUwN2QwMjMzZDNjNGY5MjRlMDQ1YmUxNTllZjkyODc2MzhjOWNmMjc5ZDA0Y6xyGPs=: 00:21:30.646 00:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.646 00:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:30.646 00:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:30.646 00:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.646 00:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:30.646 00:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # for dhgroup in "${dhgroups[@]}" 00:21:30.646 00:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:30.646 00:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:30.646 00:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:30.646 00:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 0 00:21:30.646 00:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:30.646 00:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:30.646 00:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:30.646 00:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:30.646 00:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key0 00:21:30.646 00:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:30.646 00:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.646 00:36:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:30.646 00:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:30.646 00:36:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:31.217 00:21:31.217 00:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:31.217 00:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:31.217 00:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.217 00:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.217 00:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.217 00:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:31.217 00:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.478 00:36:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:31.478 00:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:31.478 { 00:21:31.478 "cntlid": 137, 00:21:31.478 "qid": 0, 00:21:31.478 "state": "enabled", 00:21:31.478 "listen_address": { 00:21:31.478 "trtype": "TCP", 00:21:31.478 "adrfam": "IPv4", 00:21:31.478 "traddr": "10.0.0.2", 00:21:31.478 "trsvcid": "4420" 00:21:31.478 }, 00:21:31.478 "peer_address": { 00:21:31.478 "trtype": "TCP", 00:21:31.478 "adrfam": "IPv4", 00:21:31.478 "traddr": "10.0.0.1", 00:21:31.478 "trsvcid": "60506" 00:21:31.478 }, 00:21:31.478 "auth": { 00:21:31.478 "state": "completed", 00:21:31.478 "digest": "sha512", 00:21:31.478 "dhgroup": "ffdhe8192" 00:21:31.478 } 00:21:31.478 } 00:21:31.478 ]' 00:21:31.478 00:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:31.478 00:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.479 00:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:31.479 00:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:31.479 00:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:31.479 00:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.479 00:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.479 00:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.479 00:36:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:00:NDRjNGM1MzA4ZDVmMGQ5OGYyNDAyZDg1MDJiNGRhYTY1NjBlYWJmNGZlYWE4MDc52HBcqg==: 00:21:32.414 00:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:21:32.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.414 00:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:32.414 00:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:32.414 00:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.414 00:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:32.414 00:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:32.414 00:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:32.414 00:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:32.414 00:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 1 00:21:32.414 00:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:32.414 00:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:32.414 00:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:32.414 00:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:32.414 00:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key1 00:21:32.414 00:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:32.414 00:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.414 00:36:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:32.414 00:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:32.414 00:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 00:21:32.983 00:21:32.983 00:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:32.983 00:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:32.983 00:36:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.983 00:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.983 00:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.983 00:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:32.983 00:36:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.983 00:36:59 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:32.983 00:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:32.983 { 00:21:32.983 "cntlid": 139, 00:21:32.983 "qid": 0, 00:21:32.983 "state": "enabled", 00:21:32.983 "listen_address": { 00:21:32.983 "trtype": "TCP", 00:21:32.983 "adrfam": "IPv4", 00:21:32.983 "traddr": "10.0.0.2", 00:21:32.983 "trsvcid": "4420" 00:21:32.983 }, 00:21:32.983 "peer_address": { 00:21:32.983 "trtype": "TCP", 00:21:32.983 "adrfam": "IPv4", 00:21:32.983 "traddr": "10.0.0.1", 00:21:32.983 "trsvcid": "47788" 00:21:32.983 }, 00:21:32.983 "auth": { 00:21:32.983 "state": "completed", 00:21:32.983 "digest": "sha512", 00:21:32.983 "dhgroup": "ffdhe8192" 00:21:32.983 } 00:21:32.983 } 00:21:32.983 ]' 00:21:32.983 00:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:32.983 00:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.983 00:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:32.983 00:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:32.983 00:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:32.983 00:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.983 00:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.244 00:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.244 00:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:01:YTY0ZjAxMmExYzEwNzc5MjAxMmZhYzA2NGFlODhmZTjO55Yc: 00:21:33.814 00:36:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.072 00:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:34.072 00:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:34.072 00:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.072 00:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:34.072 00:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:34.072 00:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:34.072 00:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:34.072 00:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 2 00:21:34.072 00:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:34.072 00:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:34.072 
00:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:34.072 00:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:34.072 00:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key2 00:21:34.072 00:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:34.072 00:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.072 00:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:34.072 00:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:34.072 00:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:34.640 00:21:34.640 00:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:34.640 00:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:34.640 00:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.640 00:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.640 00:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.640 00:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:34.640 00:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.640 00:37:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:34.899 00:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:34.899 { 00:21:34.899 "cntlid": 141, 00:21:34.899 "qid": 0, 00:21:34.899 "state": "enabled", 00:21:34.899 "listen_address": { 00:21:34.899 "trtype": "TCP", 00:21:34.899 "adrfam": "IPv4", 00:21:34.899 "traddr": "10.0.0.2", 00:21:34.899 "trsvcid": "4420" 00:21:34.899 }, 00:21:34.899 "peer_address": { 00:21:34.899 "trtype": "TCP", 00:21:34.899 "adrfam": "IPv4", 00:21:34.899 "traddr": "10.0.0.1", 00:21:34.899 "trsvcid": "47804" 00:21:34.899 }, 00:21:34.899 "auth": { 00:21:34.899 "state": "completed", 00:21:34.899 "digest": "sha512", 00:21:34.899 "dhgroup": "ffdhe8192" 00:21:34.899 } 00:21:34.899 } 00:21:34.899 ]' 00:21:34.899 00:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:34.899 00:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.899 00:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:34.899 00:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:34.899 00:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:34.899 00:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
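The qpair check logged above is the core of the connect_authenticate helper: after attaching a controller with a given DH-HMAC-CHAP key, the test reads back the active qpairs and asserts that the negotiated digest, DH group, and auth state match what was configured. A minimal sketch of that verification pattern, assuming the same /var/tmp/host.sock host-side RPC socket, subsystem NQN, and host NQN seen throughout this run (the target-side rpc_cmd calls are assumed to use the default RPC socket):

    # host-side RPC wrapper, as used by hostrpc in this run
    hostrpc="/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
    # target-side RPC (assumed default socket)
    tgtrpc="/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py"

    # attach a controller with the key under test (key2 in the block above)
    $hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2

    # confirm the controller exists, then check the negotiated auth parameters
    [[ $($hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$($tgtrpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # tear down before the next digest/dhgroup/key combination
    $hostrpc bdev_nvme_detach_controller nvme0

This is only an editorial sketch of the sequence the log exercises; the actual test iterates it over every digest, dhgroup, and key index via the loops in target/auth.sh.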
00:21:34.899 00:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.899 00:37:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.899 00:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:02:NDRmN2Q2ZTBkN2Q5ZmQ5NzRjOGNmMzVjMmUwMGM0NjMwNmE5YWUzZWZmZTAwYjljY+xEjw==: 00:21:35.836 00:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.836 00:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:35.836 00:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:35.836 00:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.836 00:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:35.836 00:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # for keyid in "${!keys[@]}" 00:21:35.836 00:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@87 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:35.836 00:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:35.836 00:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@89 -- # connect_authenticate sha512 ffdhe8192 3 00:21:35.836 00:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:35.836 00:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:35.836 00:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:35.836 00:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:35.836 00:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key3 00:21:35.836 00:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:35.836 00:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.836 00:37:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:35.836 00:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:35.836 00:37:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:36.094 00:21:36.354 00:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc 
bdev_nvme_get_controllers 00:21:36.354 00:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.355 00:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:36.355 00:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.355 00:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.355 00:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:36.355 00:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.355 00:37:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:36.355 00:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:36.355 { 00:21:36.355 "cntlid": 143, 00:21:36.355 "qid": 0, 00:21:36.355 "state": "enabled", 00:21:36.355 "listen_address": { 00:21:36.355 "trtype": "TCP", 00:21:36.355 "adrfam": "IPv4", 00:21:36.355 "traddr": "10.0.0.2", 00:21:36.355 "trsvcid": "4420" 00:21:36.355 }, 00:21:36.355 "peer_address": { 00:21:36.355 "trtype": "TCP", 00:21:36.355 "adrfam": "IPv4", 00:21:36.355 "traddr": "10.0.0.1", 00:21:36.355 "trsvcid": "47838" 00:21:36.355 }, 00:21:36.355 "auth": { 00:21:36.355 "state": "completed", 00:21:36.355 "digest": "sha512", 00:21:36.355 "dhgroup": "ffdhe8192" 00:21:36.355 } 00:21:36.355 } 00:21:36.355 ]' 00:21:36.355 00:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:36.355 00:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.355 00:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:36.355 00:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:36.355 00:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:36.355 00:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.355 00:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.355 00:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.616 00:37:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:03:MDAyNWQzNjRjOGQxM2M1OGZkMmUwN2QwMjMzZDNjNGY5MjRlMDQ1YmUxNTllZjkyODc2MzhjOWNmMjc5ZDA0Y6xyGPs=: 00:21:37.188 00:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.188 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.188 00:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:37.188 00:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:37.188 00:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.188 00:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:37.188 00:37:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:21:37.188 00:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s sha256,sha384,sha512 00:21:37.188 00:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # IFS=, 00:21:37.188 00:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:37.188 00:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@95 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:37.188 00:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:37.447 00:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@107 -- # connect_authenticate sha512 ffdhe8192 0 00:21:37.447 00:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key qpairs 00:21:37.447 00:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:37.447 00:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:37.447 00:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:37.447 00:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@38 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key0 00:21:37.447 00:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:37.447 00:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.447 00:37:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:37.447 00:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:37.448 00:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:37.705 00:21:37.963 00:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # hostrpc bdev_nvme_get_controllers 00:21:37.963 00:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # jq -r '.[].name' 00:21:37.963 00:37:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.963 00:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@43 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.963 00:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.963 00:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:37.963 00:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.963 00:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:37.963 00:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # qpairs='[ 00:21:37.963 { 00:21:37.963 "cntlid": 145, 
00:21:37.963 "qid": 0, 00:21:37.963 "state": "enabled", 00:21:37.963 "listen_address": { 00:21:37.963 "trtype": "TCP", 00:21:37.963 "adrfam": "IPv4", 00:21:37.963 "traddr": "10.0.0.2", 00:21:37.963 "trsvcid": "4420" 00:21:37.963 }, 00:21:37.963 "peer_address": { 00:21:37.963 "trtype": "TCP", 00:21:37.963 "adrfam": "IPv4", 00:21:37.963 "traddr": "10.0.0.1", 00:21:37.963 "trsvcid": "47882" 00:21:37.963 }, 00:21:37.963 "auth": { 00:21:37.963 "state": "completed", 00:21:37.963 "digest": "sha512", 00:21:37.963 "dhgroup": "ffdhe8192" 00:21:37.963 } 00:21:37.963 } 00:21:37.963 ]' 00:21:37.963 00:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # jq -r '.[0].auth.digest' 00:21:37.963 00:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.963 00:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.dhgroup' 00:21:37.963 00:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:37.963 00:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.state' 00:21:38.221 00:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.221 00:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.221 00:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.221 00:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@51 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid 80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-secret DHHC-1:00:NDRjNGM1MzA4ZDVmMGQ5OGYyNDAyZDg1MDJiNGRhYTY1NjBlYWJmNGZlYWE4MDc52HBcqg==: 00:21:38.791 00:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@53 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.791 00:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@54 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:38.791 00:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:38.791 00:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.791 00:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:38.791 00:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@110 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --dhchap-key key1 00:21:38.791 00:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:38.791 00:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.791 00:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:38.791 00:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@111 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:38.791 00:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@649 -- # local es=0 00:21:38.791 00:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # valid_exec_arg 
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:38.791 00:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@637 -- # local arg=hostrpc 00:21:38.791 00:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:38.791 00:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # type -t hostrpc 00:21:38.791 00:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:21:38.791 00:37:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:38.791 00:37:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:39.359 request: 00:21:39.359 { 00:21:39.359 "name": "nvme0", 00:21:39.359 "trtype": "tcp", 00:21:39.359 "traddr": "10.0.0.2", 00:21:39.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda", 00:21:39.359 "adrfam": "ipv4", 00:21:39.359 "trsvcid": "4420", 00:21:39.359 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:39.359 "dhchap_key": "key2", 00:21:39.359 "method": "bdev_nvme_attach_controller", 00:21:39.360 "req_id": 1 00:21:39.360 } 00:21:39.360 Got JSON-RPC error response 00:21:39.360 response: 00:21:39.360 { 00:21:39.360 "code": -32602, 00:21:39.360 "message": "Invalid parameters" 00:21:39.360 } 00:21:39.360 00:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@652 -- # es=1 00:21:39.360 00:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:21:39.360 00:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:21:39.360 00:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:21:39.360 00:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:39.360 00:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:39.360 00:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.360 00:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:39.360 00:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:21:39.360 00:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # cleanup 00:21:39.360 00:37:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2022564 00:21:39.360 00:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' -z 2022564 ']' 00:21:39.360 00:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # kill -0 2022564 00:21:39.360 00:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # uname 00:21:39.360 00:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:39.360 00:37:05 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2022564 00:21:39.360 00:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:21:39.360 00:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:21:39.360 00:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2022564' 00:21:39.360 killing process with pid 2022564 00:21:39.360 00:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # kill 2022564 00:21:39.360 00:37:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@971 -- # wait 2022564 00:21:40.295 00:37:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:40.295 00:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:40.295 00:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:40.295 00:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:40.295 00:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:40.295 00:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:40.295 00:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:40.295 rmmod nvme_tcp 00:21:40.295 rmmod nvme_fabrics 00:21:40.295 rmmod nvme_keyring 00:21:40.295 00:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:40.295 00:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:40.295 00:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:40.295 00:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2022416 ']' 00:21:40.295 00:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2022416 00:21:40.295 00:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@947 -- # '[' -z 2022416 ']' 00:21:40.295 00:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@951 -- # kill -0 2022416 00:21:40.295 00:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # uname 00:21:40.295 00:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:40.295 00:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2022416 00:21:40.295 00:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:21:40.295 00:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:21:40.295 00:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2022416' 00:21:40.295 killing process with pid 2022416 00:21:40.295 00:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # kill 2022416 00:21:40.295 00:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@971 -- # wait 2022416 00:21:40.862 00:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:40.862 00:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:40.862 00:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:40.862 00:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:40.862 00:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:40.862 00:37:06 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:21:40.862 00:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:40.862 00:37:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.765 00:37:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:42.765 00:37:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.7Td /tmp/spdk.key-sha256.9KM /tmp/spdk.key-sha384.KbU /tmp/spdk.key-sha512.yi4 /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf-auth.log 00:21:42.765 00:21:42.765 real 1m59.143s 00:21:42.765 user 4m21.655s 00:21:42.765 sys 0m19.127s 00:21:42.765 00:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:42.765 00:37:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.765 ************************************ 00:21:42.765 END TEST nvmf_auth_target 00:21:42.765 ************************************ 00:21:42.765 00:37:08 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:21:42.765 00:37:08 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:42.765 00:37:08 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 4 -le 1 ']' 00:21:42.765 00:37:08 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:42.765 00:37:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:42.765 ************************************ 00:21:42.765 START TEST nvmf_bdevio_no_huge 00:21:42.765 ************************************ 00:21:42.765 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:43.028 * Looking for test storage... 
00:21:43.028 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:43.028 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:43.029 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:43.029 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:43.029 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:43.029 00:37:08 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.029 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:43.029 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:43.029 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:21:43.029 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:43.029 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:21:43.029 00:37:08 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 
-- # pci_devs+=("${e810[@]}") 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:21:49.602 Found 0000:27:00.0 (0x8086 - 0x159b) 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:21:49.602 Found 0000:27:00.1 (0x8086 - 0x159b) 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:21:49.602 Found net devices under 0000:27:00.0: cvl_0_0 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.602 
00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:21:49.602 Found net devices under 0000:27:00.1: cvl_0_1 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:49.602 
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:49.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:21:49.602 00:21:49.602 --- 10.0.0.2 ping statistics --- 00:21:49.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.602 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:49.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:49.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:21:49.602 00:21:49.602 --- 10.0.0.1 ping statistics --- 00:21:49.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.602 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@721 -- # xtrace_disable 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2049283 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2049283 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@828 -- # '[' -z 2049283 ']' 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local max_retries=100 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # xtrace_disable 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:49.602 00:37:15 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:49.602 [2024-05-15 00:37:15.467691] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
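The commands traced above amount to a small two-port test bed: one NIC port is moved into a private network namespace and addressed as the NVMe/TCP target, its sibling port stays in the root namespace as the initiator, and nvmf_tgt is then launched inside that namespace without hugepages. A minimal sketch of that flow, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing seen in this run (the nvmf_tgt path is shortened to the spdk checkout):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                 # root namespace -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> initiator
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &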
00:21:49.602 [2024-05-15 00:37:15.467826] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:49.602 [2024-05-15 00:37:15.634150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:49.602 [2024-05-15 00:37:15.762180] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.603 [2024-05-15 00:37:15.762233] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.603 [2024-05-15 00:37:15.762245] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.603 [2024-05-15 00:37:15.762256] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:49.603 [2024-05-15 00:37:15.762265] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:49.603 [2024-05-15 00:37:15.762478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:49.603 [2024-05-15 00:37:15.762662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:21:49.603 [2024-05-15 00:37:15.762770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:49.603 [2024-05-15 00:37:15.762796] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:21:50.172 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:21:50.172 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@861 -- # return 0 00:21:50.172 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:50.172 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@727 -- # xtrace_disable 00:21:50.172 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:50.172 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.172 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:50.172 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:50.172 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:50.172 [2024-05-15 00:37:16.229570] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.172 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:50.172 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:50.172 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:50.172 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:50.172 Malloc0 00:21:50.172 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:50.173 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:50.173 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:50.173 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:50.173 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:50.173 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:50.173 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:50.173 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:50.173 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:50.173 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:50.173 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:50.173 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:50.173 [2024-05-15 00:37:16.293363] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:21:50.173 [2024-05-15 00:37:16.293754] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.173 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:50.173 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:50.173 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:50.173 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:21:50.173 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:21:50.173 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:50.173 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:50.173 { 00:21:50.173 "params": { 00:21:50.173 "name": "Nvme$subsystem", 00:21:50.173 "trtype": "$TEST_TRANSPORT", 00:21:50.173 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.173 "adrfam": "ipv4", 00:21:50.173 "trsvcid": "$NVMF_PORT", 00:21:50.173 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.173 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.173 "hdgst": ${hdgst:-false}, 00:21:50.173 "ddgst": ${ddgst:-false} 00:21:50.173 }, 00:21:50.173 "method": "bdev_nvme_attach_controller" 00:21:50.173 } 00:21:50.173 EOF 00:21:50.173 )") 00:21:50.173 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:21:50.173 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:21:50.173 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:21:50.173 00:37:16 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:50.173 "params": { 00:21:50.173 "name": "Nvme1", 00:21:50.173 "trtype": "tcp", 00:21:50.173 "traddr": "10.0.0.2", 00:21:50.173 "adrfam": "ipv4", 00:21:50.173 "trsvcid": "4420", 00:21:50.173 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.173 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:50.173 "hdgst": false, 00:21:50.173 "ddgst": false 00:21:50.173 }, 00:21:50.173 "method": "bdev_nvme_attach_controller" 00:21:50.173 }' 00:21:50.432 [2024-05-15 00:37:16.376078] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
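Stripped of the xtrace bookkeeping, the provisioning that bdevio exercises here is a short RPC sequence: create the TCP transport, back a Malloc bdev, expose it through a subsystem, and listen on the namespaced address; bdevio then attaches as an initiator using the gen_nvmf_target_json output fed in over /dev/fd/62. A sketch with the same calls recorded above (the test issues them through the rpc_cmd wrapper; scripts/rpc.py against the default /var/tmp/spdk.sock is assumed here):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                # 64 MiB, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: bdevio consumes the generated bdev_nvme_attach_controller JSON
  test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024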
00:21:50.432 [2024-05-15 00:37:16.376212] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2049599 ] 00:21:50.432 [2024-05-15 00:37:16.524854] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:50.692 [2024-05-15 00:37:16.645475] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.692 [2024-05-15 00:37:16.645564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.692 [2024-05-15 00:37:16.645573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:50.952 I/O targets: 00:21:50.952 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:50.952 00:21:50.952 00:21:50.952 CUnit - A unit testing framework for C - Version 2.1-3 00:21:50.952 http://cunit.sourceforge.net/ 00:21:50.952 00:21:50.952 00:21:50.952 Suite: bdevio tests on: Nvme1n1 00:21:50.952 Test: blockdev write read block ...passed 00:21:50.952 Test: blockdev write zeroes read block ...passed 00:21:50.952 Test: blockdev write zeroes read no split ...passed 00:21:50.952 Test: blockdev write zeroes read split ...passed 00:21:50.952 Test: blockdev write zeroes read split partial ...passed 00:21:50.952 Test: blockdev reset ...[2024-05-15 00:37:17.079799] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:50.952 [2024-05-15 00:37:17.079904] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500039ce00 (9): Bad file descriptor 00:21:51.212 [2024-05-15 00:37:17.140489] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:51.212 passed 00:21:51.212 Test: blockdev write read 8 blocks ...passed 00:21:51.212 Test: blockdev write read size > 128k ...passed 00:21:51.212 Test: blockdev write read invalid size ...passed 00:21:51.212 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:51.212 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:51.212 Test: blockdev write read max offset ...passed 00:21:51.212 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:51.212 Test: blockdev writev readv 8 blocks ...passed 00:21:51.212 Test: blockdev writev readv 30 x 1block ...passed 00:21:51.472 Test: blockdev writev readv block ...passed 00:21:51.472 Test: blockdev writev readv size > 128k ...passed 00:21:51.472 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:51.472 Test: blockdev comparev and writev ...[2024-05-15 00:37:17.443051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:51.472 [2024-05-15 00:37:17.443093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:51.472 [2024-05-15 00:37:17.443113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:51.472 [2024-05-15 00:37:17.443123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:51.472 [2024-05-15 00:37:17.443371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:51.472 [2024-05-15 00:37:17.443382] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:51.472 [2024-05-15 00:37:17.443397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:51.472 [2024-05-15 00:37:17.443407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:51.472 [2024-05-15 00:37:17.443667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:51.472 [2024-05-15 00:37:17.443677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:51.472 [2024-05-15 00:37:17.443693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:51.472 [2024-05-15 00:37:17.443701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:51.472 [2024-05-15 00:37:17.443967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:51.472 [2024-05-15 00:37:17.443978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:51.472 [2024-05-15 00:37:17.443991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:51.472 [2024-05-15 00:37:17.444000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:51.472 passed 00:21:51.472 Test: blockdev nvme passthru rw ...passed 00:21:51.472 Test: blockdev nvme passthru vendor specific ...[2024-05-15 00:37:17.528026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:51.472 [2024-05-15 00:37:17.528051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:51.472 [2024-05-15 00:37:17.528167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:51.472 [2024-05-15 00:37:17.528175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:51.472 [2024-05-15 00:37:17.528292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:51.472 [2024-05-15 00:37:17.528303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:51.472 [2024-05-15 00:37:17.528420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:51.472 [2024-05-15 00:37:17.528430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:51.472 passed 00:21:51.472 Test: blockdev nvme admin passthru ...passed 00:21:51.472 Test: blockdev copy ...passed 00:21:51.472 00:21:51.472 Run Summary: Type Total Ran Passed Failed Inactive 00:21:51.472 suites 1 1 n/a 0 0 00:21:51.472 tests 23 23 23 0 0 00:21:51.472 asserts 
152 152 152 0 n/a 00:21:51.472 00:21:51.472 Elapsed time = 1.371 seconds 00:21:52.043 00:37:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:52.043 00:37:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@560 -- # xtrace_disable 00:21:52.043 00:37:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:52.043 00:37:17 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:21:52.043 00:37:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:52.043 00:37:17 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:52.043 00:37:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:52.043 00:37:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:21:52.043 00:37:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:52.043 00:37:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:21:52.043 00:37:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:52.043 00:37:17 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:52.043 rmmod nvme_tcp 00:21:52.043 rmmod nvme_fabrics 00:21:52.043 rmmod nvme_keyring 00:21:52.043 00:37:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:52.043 00:37:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:21:52.043 00:37:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:21:52.043 00:37:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2049283 ']' 00:21:52.043 00:37:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2049283 00:21:52.043 00:37:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@947 -- # '[' -z 2049283 ']' 00:21:52.043 00:37:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@951 -- # kill -0 2049283 00:21:52.043 00:37:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # uname 00:21:52.043 00:37:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:21:52.043 00:37:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2049283 00:21:52.043 00:37:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # process_name=reactor_3 00:21:52.043 00:37:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' reactor_3 = sudo ']' 00:21:52.043 00:37:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2049283' 00:21:52.043 killing process with pid 2049283 00:21:52.043 00:37:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # kill 2049283 00:21:52.043 [2024-05-15 00:37:18.076953] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:21:52.043 00:37:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # wait 2049283 00:21:52.613 00:37:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:52.613 00:37:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:52.613 00:37:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:52.613 00:37:18 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:52.613 00:37:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:52.613 00:37:18 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.613 00:37:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:52.613 00:37:18 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.564 00:37:20 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:54.564 00:21:54.564 real 0m11.653s 00:21:54.564 user 0m15.057s 00:21:54.564 sys 0m5.798s 00:21:54.564 00:37:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # xtrace_disable 00:21:54.564 00:37:20 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:54.564 ************************************ 00:21:54.564 END TEST nvmf_bdevio_no_huge 00:21:54.564 ************************************ 00:21:54.564 00:37:20 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:54.564 00:37:20 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:21:54.564 00:37:20 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:21:54.564 00:37:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:54.564 ************************************ 00:21:54.564 START TEST nvmf_tls 00:21:54.564 ************************************ 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:54.564 * Looking for test storage... 
00:21:54.564 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:21:54.564 00:37:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:21:59.870 
00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:21:59.870 Found 0000:27:00.0 (0x8086 - 0x159b) 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:21:59.870 Found 0000:27:00.1 (0x8086 - 0x159b) 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:21:59.870 Found net devices under 0000:27:00.0: cvl_0_0 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:21:59.870 Found net devices under 0000:27:00.1: cvl_0_1 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:59.870 00:37:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:59.870 00:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:59.870 00:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:59.870 00:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:59.870 00:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:00.130 00:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:00.130 00:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:00.130 00:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:00.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:00.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.618 ms 00:22:00.130 00:22:00.130 --- 10.0.0.2 ping statistics --- 00:22:00.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.130 rtt min/avg/max/mdev = 0.618/0.618/0.618/0.000 ms 00:22:00.130 00:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:00.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:00.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:22:00.130 00:22:00.130 --- 10.0.0.1 ping statistics --- 00:22:00.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.130 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:22:00.130 00:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:00.130 00:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:00.130 00:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:00.130 00:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:00.130 00:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:00.130 00:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:00.130 00:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:00.130 00:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:00.130 00:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:00.130 00:37:26 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:00.130 00:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:00.131 00:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:00.131 00:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.131 00:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2053790 00:22:00.131 00:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2053790 00:22:00.131 00:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2053790 ']' 00:22:00.131 00:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.131 00:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:00.131 00:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.131 00:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:00.131 00:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.131 00:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:00.131 [2024-05-15 00:37:26.224376] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:22:00.131 [2024-05-15 00:37:26.224475] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.391 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.391 [2024-05-15 00:37:26.370574] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.391 [2024-05-15 00:37:26.528866] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.391 [2024-05-15 00:37:26.528923] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:00.391 [2024-05-15 00:37:26.528939] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.391 [2024-05-15 00:37:26.528954] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.391 [2024-05-15 00:37:26.528967] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:00.391 [2024-05-15 00:37:26.529007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.958 00:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:00.958 00:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:00.958 00:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:00.958 00:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:00.958 00:37:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.958 00:37:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.958 00:37:26 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:00.958 00:37:26 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:00.958 true 00:22:01.217 00:37:27 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:01.217 00:37:27 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:01.217 00:37:27 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:01.217 00:37:27 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:01.217 00:37:27 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:01.475 00:37:27 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:01.475 00:37:27 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:01.475 00:37:27 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:01.475 00:37:27 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:01.475 00:37:27 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:01.733 00:37:27 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:01.733 00:37:27 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:01.733 00:37:27 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:01.733 00:37:27 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:01.733 00:37:27 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:01.733 00:37:27 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:01.992 00:37:27 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:01.992 00:37:27 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:01.992 00:37:27 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:01.992 00:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:01.992 00:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:02.250 00:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:02.250 00:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:02.250 00:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:02.250 00:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:02.250 00:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:02.509 00:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:02.509 00:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:02.509 00:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:02.509 00:37:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:02.509 00:37:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:02.509 00:37:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:02.509 00:37:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:02.509 00:37:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:02.509 00:37:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:02.509 00:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:02.509 00:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:02.509 00:37:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:02.509 00:37:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:02.509 00:37:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:02.509 00:37:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:02.509 00:37:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:02.509 00:37:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:02.509 00:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:02.509 00:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:02.509 00:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.gLNcoGeD68 00:22:02.509 00:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:02.509 00:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.6ygtWbFycK 00:22:02.509 00:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:02.509 00:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:02.509 00:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.gLNcoGeD68 00:22:02.509 00:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.6ygtWbFycK 00:22:02.509 00:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 
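By this point tls.sh has generated two interchange-format PSKs (the NVMeTLSkey-1:01:... strings above), parked each in a private temp file, and pinned the ssl socket implementation to TLS 1.3 before the framework is initialized. A sketch of that preparation, reusing the key material and mktemp paths from this run (both are run-specific values, not fixed names):

  key_path=/tmp/tmp.gLNcoGeD68       # mktemp output in this run
  key_2_path=/tmp/tmp.6ygtWbFycK
  echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
  echo -n 'NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:' > "$key_2_path"
  chmod 0600 "$key_path" "$key_2_path"                               # keep the PSK files private
  scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13       # force TLSv1.3 for the ssl impl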
00:22:02.769 00:37:28 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:03.028 00:37:29 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.gLNcoGeD68 00:22:03.028 00:37:29 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.gLNcoGeD68 00:22:03.028 00:37:29 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:03.028 [2024-05-15 00:37:29.137857] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.028 00:37:29 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:03.286 00:37:29 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:03.286 [2024-05-15 00:37:29.409862] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:03.286 [2024-05-15 00:37:29.409937] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:03.286 [2024-05-15 00:37:29.410128] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.286 00:37:29 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:03.545 malloc0 00:22:03.545 00:37:29 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:03.803 00:37:29 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gLNcoGeD68 00:22:03.804 [2024-05-15 00:37:29.842536] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:03.804 00:37:29 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.gLNcoGeD68 00:22:03.804 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.016 Initializing NVMe Controllers 00:22:16.016 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:16.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:16.016 Initialization complete. Launching workers. 
00:22:16.016 ======================================================== 00:22:16.016 Latency(us) 00:22:16.016 Device Information : IOPS MiB/s Average min max 00:22:16.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17184.94 67.13 3724.52 1068.39 5462.11 00:22:16.016 ======================================================== 00:22:16.016 Total : 17184.94 67.13 3724.52 1068.39 5462.11 00:22:16.016 00:22:16.016 00:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gLNcoGeD68 00:22:16.016 00:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:16.016 00:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:16.016 00:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:16.016 00:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.gLNcoGeD68' 00:22:16.016 00:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:16.016 00:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2056521 00:22:16.016 00:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:16.016 00:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2056521 /var/tmp/bdevperf.sock 00:22:16.016 00:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2056521 ']' 00:22:16.016 00:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:16.016 00:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:16.016 00:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:16.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:16.016 00:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:16.016 00:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:16.016 00:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:16.016 [2024-05-15 00:37:40.120309] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
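The bdevperf run recorded next follows the harness's usual three-step pattern: start bdevperf idle (-z) on its own RPC socket, attach an NVMe/TCP controller with the PSK, then drive the workload through the helper script. Condensed from the commands in this run (the key path is the temp file created earlier; pids and timings are run-specific):

  build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.gLNcoGeD68
  examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests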
00:22:16.016 [2024-05-15 00:37:40.120458] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2056521 ] 00:22:16.016 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.016 [2024-05-15 00:37:40.250856] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.016 [2024-05-15 00:37:40.342039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.016 00:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:16.016 00:37:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:16.016 00:37:40 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gLNcoGeD68 00:22:16.016 [2024-05-15 00:37:40.948132] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:16.016 [2024-05-15 00:37:40.948239] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:16.016 TLSTESTn1 00:22:16.016 00:37:41 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:16.016 Running I/O for 10 seconds... 00:22:25.996 00:22:25.996 Latency(us) 00:22:25.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.996 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:25.996 Verification LBA range: start 0x0 length 0x2000 00:22:25.996 TLSTESTn1 : 10.01 5302.78 20.71 0.00 0.00 24105.89 4622.01 35596.40 00:22:25.996 =================================================================================================================== 00:22:25.996 Total : 5302.78 20.71 0.00 0.00 24105.89 4622.01 35596.40 00:22:25.996 0 00:22:25.996 00:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:25.996 00:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2056521 00:22:25.996 00:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2056521 ']' 00:22:25.996 00:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2056521 00:22:25.996 00:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:25.996 00:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:25.996 00:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2056521 00:22:25.996 00:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:25.996 00:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:25.996 00:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2056521' 00:22:25.996 killing process with pid 2056521 00:22:25.996 00:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2056521 00:22:25.996 Received shutdown signal, test time was about 10.000000 seconds 00:22:25.996 00:22:25.996 Latency(us) 00:22:25.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.996 
=================================================================================================================== 00:22:25.996 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:25.996 [2024-05-15 00:37:51.175677] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:25.996 00:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2056521 00:22:25.996 00:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6ygtWbFycK 00:22:25.996 00:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:25.996 00:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6ygtWbFycK 00:22:25.997 00:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:25.997 00:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:25.997 00:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:25.997 00:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:25.997 00:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6ygtWbFycK 00:22:25.997 00:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:25.997 00:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:25.997 00:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:25.997 00:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.6ygtWbFycK' 00:22:25.997 00:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:25.997 00:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2058826 00:22:25.997 00:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:25.997 00:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2058826 /var/tmp/bdevperf.sock 00:22:25.997 00:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2058826 ']' 00:22:25.997 00:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:25.997 00:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:25.997 00:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:25.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:25.997 00:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:25.997 00:37:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:25.997 00:37:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:25.997 [2024-05-15 00:37:51.657408] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:22:25.997 [2024-05-15 00:37:51.657548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2058826 ] 00:22:25.997 EAL: No free 2048 kB hugepages reported on node 1 00:22:25.997 [2024-05-15 00:37:51.788690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.997 [2024-05-15 00:37:51.885299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.253 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:26.253 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:26.253 00:37:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6ygtWbFycK 00:22:26.510 [2024-05-15 00:37:52.482082] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:26.511 [2024-05-15 00:37:52.482187] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:26.511 [2024-05-15 00:37:52.490887] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:26.511 [2024-05-15 00:37:52.491801] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a1180 (107): Transport endpoint is not connected 00:22:26.511 [2024-05-15 00:37:52.492781] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a1180 (9): Bad file descriptor 00:22:26.511 [2024-05-15 00:37:52.493776] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:26.511 [2024-05-15 00:37:52.493801] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:26.511 [2024-05-15 00:37:52.493813] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
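The failure above is the intended outcome of target/tls.sh@146: the initiator presents /tmp/tmp.6ygtWbFycK, a key that the target side does not appear to have been set up with in this run (setup_nvmf_tgt used /tmp/tmp.gLNcoGeD68), so the TLS handshake is torn down and the controller ends up in the failed state. A minimal sketch of reproducing the same negative case by hand is below; it assumes the target from this run is still listening with TLS on 10.0.0.2:4420 and that a bdevperf instance was started with "-z -r /var/tmp/bdevperf.sock". Only RPCs and paths already present in this log are used.

SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
# Attach with a key file the target was not configured with; the handshake is rejected
# and the initiator reports "Transport endpoint is not connected" (errno 107), as logged above.
if ! $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk /tmp/tmp.6ygtWbFycK; then
    echo "attach failed as expected with a mismatched PSK"
fi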
00:22:26.511 request: 00:22:26.511 { 00:22:26.511 "name": "TLSTEST", 00:22:26.511 "trtype": "tcp", 00:22:26.511 "traddr": "10.0.0.2", 00:22:26.511 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:26.511 "adrfam": "ipv4", 00:22:26.511 "trsvcid": "4420", 00:22:26.511 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.511 "psk": "/tmp/tmp.6ygtWbFycK", 00:22:26.511 "method": "bdev_nvme_attach_controller", 00:22:26.511 "req_id": 1 00:22:26.511 } 00:22:26.511 Got JSON-RPC error response 00:22:26.511 response: 00:22:26.511 { 00:22:26.511 "code": -32602, 00:22:26.511 "message": "Invalid parameters" 00:22:26.511 } 00:22:26.511 00:37:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2058826 00:22:26.511 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2058826 ']' 00:22:26.511 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2058826 00:22:26.511 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:26.511 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:26.511 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2058826 00:22:26.511 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:26.511 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:26.511 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2058826' 00:22:26.511 killing process with pid 2058826 00:22:26.511 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2058826 00:22:26.511 Received shutdown signal, test time was about 10.000000 seconds 00:22:26.511 00:22:26.511 Latency(us) 00:22:26.511 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.511 =================================================================================================================== 00:22:26.511 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:26.511 [2024-05-15 00:37:52.548558] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:26.511 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2058826 00:22:26.770 00:37:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:26.770 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:26.770 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:26.770 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:26.770 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:26.770 00:37:52 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.gLNcoGeD68 00:22:26.770 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:26.770 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.gLNcoGeD68 00:22:26.770 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:26.770 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:26.770 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:26.770 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 
-- # case "$(type -t "$arg")" in 00:22:26.770 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.gLNcoGeD68 00:22:26.770 00:37:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:26.770 00:37:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:26.770 00:37:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:26.770 00:37:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.gLNcoGeD68' 00:22:26.770 00:37:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:26.770 00:37:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2059016 00:22:26.770 00:37:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:26.770 00:37:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2059016 /var/tmp/bdevperf.sock 00:22:26.770 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2059016 ']' 00:22:26.770 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:26.770 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:26.770 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:26.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:26.770 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:26.770 00:37:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:26.770 00:37:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:27.030 [2024-05-15 00:37:52.983420] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:22:27.030 [2024-05-15 00:37:52.983567] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2059016 ] 00:22:27.030 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.030 [2024-05-15 00:37:53.099032] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.290 [2024-05-15 00:37:53.196863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.550 00:37:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:27.550 00:37:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:27.550 00:37:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.gLNcoGeD68 00:22:27.807 [2024-05-15 00:37:53.830211] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:27.807 [2024-05-15 00:37:53.830309] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:27.807 [2024-05-15 00:37:53.840827] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:27.807 [2024-05-15 00:37:53.840857] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:27.807 [2024-05-15 00:37:53.840894] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:27.807 [2024-05-15 00:37:53.841612] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a1180 (107): Transport endpoint is not connected 00:22:27.807 [2024-05-15 00:37:53.842593] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a1180 (9): Bad file descriptor 00:22:27.807 [2024-05-15 00:37:53.843588] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:27.807 [2024-05-15 00:37:53.843608] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:27.807 [2024-05-15 00:37:53.843620] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
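Here the attach uses the right key file but the wrong host NQN: the target looks the PSK up by an identity derived from the host and subsystem NQNs (logged as "NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1"), and since only host1 was registered against cnode1 via nvmf_subsystem_add_host, the lookup fails and the connection is dropped. Purely as an illustration, and not something this test does, host2 could be made to work by registering the same PSK for it on the target side; the rpc.py path and arguments below are the ones already used earlier in this log.

# Target-side call, against the default /var/tmp/spdk.sock of the nvmf_tgt started earlier.
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host \
    nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.gLNcoGeD68
# After this, the identity for (host2, cnode1) would resolve during the TLS handshake
# instead of failing as it does above.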
00:22:27.807 request: 00:22:27.807 { 00:22:27.807 "name": "TLSTEST", 00:22:27.807 "trtype": "tcp", 00:22:27.808 "traddr": "10.0.0.2", 00:22:27.808 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:27.808 "adrfam": "ipv4", 00:22:27.808 "trsvcid": "4420", 00:22:27.808 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:27.808 "psk": "/tmp/tmp.gLNcoGeD68", 00:22:27.808 "method": "bdev_nvme_attach_controller", 00:22:27.808 "req_id": 1 00:22:27.808 } 00:22:27.808 Got JSON-RPC error response 00:22:27.808 response: 00:22:27.808 { 00:22:27.808 "code": -32602, 00:22:27.808 "message": "Invalid parameters" 00:22:27.808 } 00:22:27.808 00:37:53 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2059016 00:22:27.808 00:37:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2059016 ']' 00:22:27.808 00:37:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2059016 00:22:27.808 00:37:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:27.808 00:37:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:27.808 00:37:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2059016 00:22:27.808 00:37:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:27.808 00:37:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:27.808 00:37:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2059016' 00:22:27.808 killing process with pid 2059016 00:22:27.808 00:37:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2059016 00:22:27.808 Received shutdown signal, test time was about 10.000000 seconds 00:22:27.808 00:22:27.808 Latency(us) 00:22:27.808 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.808 =================================================================================================================== 00:22:27.808 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:27.808 [2024-05-15 00:37:53.899287] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:27.808 00:37:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2059016 00:22:28.373 00:37:54 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:28.373 00:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:28.373 00:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:28.373 00:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:28.373 00:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:28.373 00:37:54 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.gLNcoGeD68 00:22:28.373 00:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:28.373 00:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.gLNcoGeD68 00:22:28.373 00:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:28.373 00:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:28.373 00:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:28.373 00:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 
-- # case "$(type -t "$arg")" in 00:22:28.373 00:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.gLNcoGeD68 00:22:28.373 00:37:54 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:28.373 00:37:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:28.373 00:37:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:28.373 00:37:54 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.gLNcoGeD68' 00:22:28.373 00:37:54 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:28.373 00:37:54 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2059233 00:22:28.373 00:37:54 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:28.373 00:37:54 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2059233 /var/tmp/bdevperf.sock 00:22:28.373 00:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2059233 ']' 00:22:28.373 00:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:28.373 00:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:28.373 00:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:28.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:28.373 00:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:28.373 00:37:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:28.373 00:37:54 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:28.373 [2024-05-15 00:37:54.317684] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:22:28.373 [2024-05-15 00:37:54.317800] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2059233 ] 00:22:28.373 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.373 [2024-05-15 00:37:54.427513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.373 [2024-05-15 00:37:54.522838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.942 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:28.942 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:28.942 00:37:55 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gLNcoGeD68 00:22:29.203 [2024-05-15 00:37:55.176007] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:29.203 [2024-05-15 00:37:55.176104] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:29.203 [2024-05-15 00:37:55.183087] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:29.203 [2024-05-15 00:37:55.183117] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:29.203 [2024-05-15 00:37:55.183152] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:29.203 [2024-05-15 00:37:55.183502] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a1180 (107): Transport endpoint is not connected 00:22:29.203 [2024-05-15 00:37:55.184484] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a1180 (9): Bad file descriptor 00:22:29.203 [2024-05-15 00:37:55.185478] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:29.203 [2024-05-15 00:37:55.185498] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:29.203 [2024-05-15 00:37:55.185511] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
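This is the mirror image of the previous case: the key and host NQN are the ones the target knows, but the subsystem NQN is cnode2, which was never created, so the PSK lookup for the identity "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2" has nothing to match. A quick way to confirm the target-side view is to list the configured subsystems; the nvmf_get_subsystems RPC is not invoked anywhere in this log, so treat the call below as an assumed-available convenience rather than part of the test.

# Lists the configured subsystems (here the discovery subsystem and cnode1) together with
# their listeners and allowed hosts, which is what the PSK identity lookup is keyed on.
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems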
00:22:29.203 request: 00:22:29.203 { 00:22:29.203 "name": "TLSTEST", 00:22:29.203 "trtype": "tcp", 00:22:29.203 "traddr": "10.0.0.2", 00:22:29.203 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:29.203 "adrfam": "ipv4", 00:22:29.203 "trsvcid": "4420", 00:22:29.203 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:29.203 "psk": "/tmp/tmp.gLNcoGeD68", 00:22:29.203 "method": "bdev_nvme_attach_controller", 00:22:29.203 "req_id": 1 00:22:29.203 } 00:22:29.203 Got JSON-RPC error response 00:22:29.203 response: 00:22:29.203 { 00:22:29.203 "code": -32602, 00:22:29.203 "message": "Invalid parameters" 00:22:29.203 } 00:22:29.203 00:37:55 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2059233 00:22:29.203 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2059233 ']' 00:22:29.203 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2059233 00:22:29.203 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:29.203 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:29.203 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2059233 00:22:29.203 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:29.203 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:29.203 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2059233' 00:22:29.203 killing process with pid 2059233 00:22:29.203 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2059233 00:22:29.203 Received shutdown signal, test time was about 10.000000 seconds 00:22:29.203 00:22:29.203 Latency(us) 00:22:29.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:29.203 =================================================================================================================== 00:22:29.203 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:29.203 [2024-05-15 00:37:55.259908] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:29.203 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2059233 00:22:29.462 00:37:55 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:29.462 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:29.462 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:29.462 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:29.462 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:29.462 00:37:55 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:29.462 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:29.462 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:29.462 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:29.462 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:29.462 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:29.462 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 
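The autotest_common.sh traces above (NOT, valid_exec_arg, type -t) show the harness checking that run_bdevperf is a callable function and then running it with the expectation that it fails; target/tls.sh@37 later turns the failed attach into "return 1", which NOT appears to invert back into a passing assertion. A stripped-down, hypothetical equivalent of that helper is sketched below; the real implementation in autotest_common.sh does additional bookkeeping and is not reproduced here.

# Hypothetical simplification of the NOT helper: succeed only if the wrapped command fails.
NOT() {
    if "$@"; then
        return 1    # the command unexpectedly succeeded, so the negative test fails
    fi
    return 0        # a non-zero exit status was the expected outcome
}
# Usage mirroring the trace above:
#   NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''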
00:22:29.462 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:29.462 00:37:55 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:29.462 00:37:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:29.462 00:37:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:29.462 00:37:55 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:29.462 00:37:55 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:29.462 00:37:55 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2059543 00:22:29.462 00:37:55 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:29.462 00:37:55 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2059543 /var/tmp/bdevperf.sock 00:22:29.462 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2059543 ']' 00:22:29.462 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:29.462 00:37:55 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:29.462 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:29.462 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:29.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:29.462 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:29.462 00:37:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:29.723 [2024-05-15 00:37:55.707493] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:22:29.723 [2024-05-15 00:37:55.707645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2059543 ] 00:22:29.723 EAL: No free 2048 kB hugepages reported on node 1 00:22:29.723 [2024-05-15 00:37:55.837812] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:29.981 [2024-05-15 00:37:55.937090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:30.550 00:37:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:30.550 00:37:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:30.550 00:37:56 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:30.550 [2024-05-15 00:37:56.581297] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:30.550 [2024-05-15 00:37:56.582689] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a0c80 (9): Bad file descriptor 00:22:30.550 [2024-05-15 00:37:56.583676] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:30.550 [2024-05-15 00:37:56.583698] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:30.550 [2024-05-15 00:37:56.583712] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:22:30.550 request: 00:22:30.550 { 00:22:30.550 "name": "TLSTEST", 00:22:30.550 "trtype": "tcp", 00:22:30.550 "traddr": "10.0.0.2", 00:22:30.550 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:30.550 "adrfam": "ipv4", 00:22:30.550 "trsvcid": "4420", 00:22:30.550 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.550 "method": "bdev_nvme_attach_controller", 00:22:30.550 "req_id": 1 00:22:30.550 } 00:22:30.550 Got JSON-RPC error response 00:22:30.550 response: 00:22:30.550 { 00:22:30.550 "code": -32602, 00:22:30.550 "message": "Invalid parameters" 00:22:30.550 } 00:22:30.550 00:37:56 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2059543 00:22:30.550 00:37:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2059543 ']' 00:22:30.550 00:37:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2059543 00:22:30.550 00:37:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:30.550 00:37:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:30.550 00:37:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2059543 00:22:30.550 00:37:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:30.550 00:37:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:30.550 00:37:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2059543' 00:22:30.550 killing process with pid 2059543 00:22:30.550 00:37:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2059543 00:22:30.550 Received shutdown signal, test time was about 10.000000 seconds 00:22:30.550 00:22:30.550 Latency(us) 00:22:30.550 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:30.550 =================================================================================================================== 00:22:30.550 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:30.550 00:37:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2059543 00:22:31.119 00:37:57 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:31.119 00:37:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:31.119 00:37:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:31.119 00:37:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:31.119 00:37:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:31.119 00:37:57 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 2053790 00:22:31.119 00:37:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2053790 ']' 00:22:31.119 00:37:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2053790 00:22:31.119 00:37:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:31.119 00:37:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:31.119 00:37:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2053790 00:22:31.119 00:37:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:31.119 00:37:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:31.119 00:37:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2053790' 00:22:31.119 killing process with pid 2053790 00:22:31.119 00:37:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2053790 
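With an empty PSK argument the attach above is presumably treated as a plain, non-TLS connection attempt against a listener that was created with "-k", so the target drops the socket during setup and the initiator again ends in the failed state. For contrast, the working combination exercised earlier in this run pairs the registered key with the registered host and subsystem; a condensed sketch of that path, using only commands already present in this log and assuming bdevperf is still up on /var/tmp/bdevperf.sock, looks like this:

SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
# Attach with the key that setup_nvmf_tgt registered for host1 on cnode1 ...
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gLNcoGeD68
# ... and then drive I/O through the resulting TLSTESTn1 bdev for the timed run.
$SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests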
00:22:31.119 [2024-05-15 00:37:57.054956] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:31.119 [2024-05-15 00:37:57.055007] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:31.119 00:37:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2053790 00:22:31.690 00:37:57 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:31.690 00:37:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:31.690 00:37:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:31.690 00:37:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:31.690 00:37:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:31.690 00:37:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:31.690 00:37:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:31.690 00:37:57 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:31.690 00:37:57 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:31.690 00:37:57 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.yZEmyQR5mU 00:22:31.690 00:37:57 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:31.690 00:37:57 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.yZEmyQR5mU 00:22:31.690 00:37:57 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:31.690 00:37:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:31.690 00:37:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:31.690 00:37:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.690 00:37:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2059953 00:22:31.690 00:37:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2059953 00:22:31.690 00:37:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2059953 ']' 00:22:31.690 00:37:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.690 00:37:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:31.690 00:37:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:31.690 00:37:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.690 00:37:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:31.690 00:37:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:31.690 [2024-05-15 00:37:57.762898] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:22:31.690 [2024-05-15 00:37:57.763029] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.950 EAL: No free 2048 kB hugepages reported on node 1 00:22:31.950 [2024-05-15 00:37:57.907267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.950 [2024-05-15 00:37:58.007046] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.950 [2024-05-15 00:37:58.007099] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.950 [2024-05-15 00:37:58.007110] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.950 [2024-05-15 00:37:58.007121] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.950 [2024-05-15 00:37:58.007130] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:31.950 [2024-05-15 00:37:58.007164] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.520 00:37:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:32.520 00:37:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:32.520 00:37:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:32.520 00:37:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:32.520 00:37:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:32.521 00:37:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:32.521 00:37:58 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.yZEmyQR5mU 00:22:32.521 00:37:58 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.yZEmyQR5mU 00:22:32.521 00:37:58 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:32.781 [2024-05-15 00:37:58.714579] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:32.781 00:37:58 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:32.781 00:37:58 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:33.041 [2024-05-15 00:37:59.010568] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:33.041 [2024-05-15 00:37:59.010664] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:33.041 [2024-05-15 00:37:59.010919] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:33.041 00:37:59 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:33.041 malloc0 00:22:33.299 00:37:59 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:33.299 00:37:59 
nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yZEmyQR5mU 00:22:33.559 [2024-05-15 00:37:59.501754] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:33.559 00:37:59 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yZEmyQR5mU 00:22:33.559 00:37:59 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:33.559 00:37:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:33.559 00:37:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:33.559 00:37:59 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.yZEmyQR5mU' 00:22:33.559 00:37:59 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:33.559 00:37:59 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2060451 00:22:33.559 00:37:59 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:33.559 00:37:59 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2060451 /var/tmp/bdevperf.sock 00:22:33.559 00:37:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2060451 ']' 00:22:33.559 00:37:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:33.559 00:37:59 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:33.559 00:37:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:33.559 00:37:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:33.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:33.559 00:37:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:33.559 00:37:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:33.559 [2024-05-15 00:37:59.604652] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:22:33.559 [2024-05-15 00:37:59.604796] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2060451 ] 00:22:33.559 EAL: No free 2048 kB hugepages reported on node 1 00:22:33.818 [2024-05-15 00:37:59.733895] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.818 [2024-05-15 00:37:59.831454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:34.384 00:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:34.384 00:38:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:34.384 00:38:00 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yZEmyQR5mU 00:22:34.384 [2024-05-15 00:38:00.431216] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:34.384 [2024-05-15 00:38:00.431326] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:34.384 TLSTESTn1 00:22:34.384 00:38:00 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:34.642 Running I/O for 10 seconds... 00:22:44.663 00:22:44.663 Latency(us) 00:22:44.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.663 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:44.663 Verification LBA range: start 0x0 length 0x2000 00:22:44.663 TLSTESTn1 : 10.01 5580.41 21.80 0.00 0.00 22904.83 5242.88 30491.49 00:22:44.663 =================================================================================================================== 00:22:44.663 Total : 5580.41 21.80 0.00 0.00 22904.83 5242.88 30491.49 00:22:44.663 0 00:22:44.663 00:38:10 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:44.663 00:38:10 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 2060451 00:22:44.663 00:38:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2060451 ']' 00:22:44.663 00:38:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2060451 00:22:44.663 00:38:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:44.663 00:38:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:44.663 00:38:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2060451 00:22:44.663 00:38:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:44.663 00:38:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:44.663 00:38:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2060451' 00:22:44.663 killing process with pid 2060451 00:22:44.663 00:38:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2060451 00:22:44.663 Received shutdown signal, test time was about 10.000000 seconds 00:22:44.663 00:22:44.663 Latency(us) 00:22:44.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.663 
=================================================================================================================== 00:22:44.663 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:44.663 [2024-05-15 00:38:10.661987] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:44.663 00:38:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2060451 00:22:44.922 00:38:11 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.yZEmyQR5mU 00:22:44.922 00:38:11 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yZEmyQR5mU 00:22:44.922 00:38:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:44.922 00:38:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yZEmyQR5mU 00:22:44.922 00:38:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=run_bdevperf 00:22:44.922 00:38:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:44.922 00:38:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t run_bdevperf 00:22:44.922 00:38:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:44.922 00:38:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yZEmyQR5mU 00:22:44.922 00:38:11 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:44.922 00:38:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:44.922 00:38:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:44.922 00:38:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.yZEmyQR5mU' 00:22:44.922 00:38:11 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:44.922 00:38:11 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2062571 00:22:44.922 00:38:11 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:44.922 00:38:11 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2062571 /var/tmp/bdevperf.sock 00:22:44.922 00:38:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2062571 ']' 00:22:44.922 00:38:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:44.922 00:38:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:44.922 00:38:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:44.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:44.922 00:38:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:44.922 00:38:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.922 00:38:11 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:45.182 [2024-05-15 00:38:11.141281] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:22:45.182 [2024-05-15 00:38:11.141418] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2062571 ] 00:22:45.182 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.182 [2024-05-15 00:38:11.270618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.442 [2024-05-15 00:38:11.366594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:46.012 00:38:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:46.012 00:38:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:46.012 00:38:11 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yZEmyQR5mU 00:22:46.012 [2024-05-15 00:38:12.032012] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:46.012 [2024-05-15 00:38:12.032091] bdev_nvme.c:6105:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:46.012 [2024-05-15 00:38:12.032104] bdev_nvme.c:6214:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.yZEmyQR5mU 00:22:46.012 request: 00:22:46.012 { 00:22:46.012 "name": "TLSTEST", 00:22:46.012 "trtype": "tcp", 00:22:46.012 "traddr": "10.0.0.2", 00:22:46.012 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:46.012 "adrfam": "ipv4", 00:22:46.012 "trsvcid": "4420", 00:22:46.012 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:46.012 "psk": "/tmp/tmp.yZEmyQR5mU", 00:22:46.012 "method": "bdev_nvme_attach_controller", 00:22:46.012 "req_id": 1 00:22:46.012 } 00:22:46.012 Got JSON-RPC error response 00:22:46.012 response: 00:22:46.012 { 00:22:46.012 "code": -1, 00:22:46.012 "message": "Operation not permitted" 00:22:46.012 } 00:22:46.012 00:38:12 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 2062571 00:22:46.012 00:38:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2062571 ']' 00:22:46.012 00:38:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2062571 00:22:46.012 00:38:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:46.012 00:38:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:46.012 00:38:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2062571 00:22:46.012 00:38:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:46.012 00:38:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:46.012 00:38:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2062571' 00:22:46.012 killing process with pid 2062571 00:22:46.012 00:38:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2062571 00:22:46.012 Received shutdown signal, test time was about 10.000000 seconds 00:22:46.012 00:22:46.012 Latency(us) 00:22:46.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.012 =================================================================================================================== 00:22:46.012 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:46.012 00:38:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # 
wait 2062571 00:22:46.582 00:38:12 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:46.582 00:38:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:46.582 00:38:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:46.582 00:38:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:46.582 00:38:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:46.582 00:38:12 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 2059953 00:22:46.582 00:38:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2059953 ']' 00:22:46.583 00:38:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2059953 00:22:46.583 00:38:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:46.583 00:38:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:46.583 00:38:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2059953 00:22:46.583 00:38:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:46.583 00:38:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:46.583 00:38:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2059953' 00:22:46.583 killing process with pid 2059953 00:22:46.583 00:38:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2059953 00:22:46.583 [2024-05-15 00:38:12.505214] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:46.583 [2024-05-15 00:38:12.505288] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:46.583 00:38:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2059953 00:22:47.153 00:38:13 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:47.153 00:38:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:47.153 00:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:47.153 00:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.153 00:38:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2062887 00:22:47.153 00:38:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2062887 00:22:47.153 00:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2062887 ']' 00:22:47.153 00:38:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:47.153 00:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.153 00:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:47.153 00:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
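target/tls.sh@175 now brings up a fresh target for the next group of cases: nvmfappstart launches nvmf_tgt with core mask 0x2 inside the cvl_0_0_ns_spdk namespace and waitforlisten blocks until the application answers on /var/tmp/spdk.sock. A rough sketch of what those two helpers amount to is below; the real versions in nvmf/common.sh and autotest_common.sh add retry limits, shared-memory-id handling and error reporting, and rpc_get_methods is assumed here simply as a cheap RPC to poll with.

SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
# Start the target in the test namespace with the 0xFFFF tracepoint mask, as the log shows.
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!    # pid of the "ip netns exec" wrapper process
# Poll the default RPC socket until the application is ready to accept configuration RPCs.
until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf target is listening on /var/tmp/spdk.sock"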
00:22:47.153 00:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:47.153 00:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.153 [2024-05-15 00:38:13.121879] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:22:47.153 [2024-05-15 00:38:13.122016] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.153 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.153 [2024-05-15 00:38:13.268666] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.415 [2024-05-15 00:38:13.368475] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.415 [2024-05-15 00:38:13.368528] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.415 [2024-05-15 00:38:13.368538] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:47.415 [2024-05-15 00:38:13.368557] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:47.415 [2024-05-15 00:38:13.368565] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:47.415 [2024-05-15 00:38:13.368609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:47.674 00:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:47.674 00:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:47.674 00:38:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:47.934 00:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:47.934 00:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.934 00:38:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:47.934 00:38:13 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.yZEmyQR5mU 00:22:47.934 00:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@649 -- # local es=0 00:22:47.934 00:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.yZEmyQR5mU 00:22:47.934 00:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@637 -- # local arg=setup_nvmf_tgt 00:22:47.934 00:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:47.934 00:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # type -t setup_nvmf_tgt 00:22:47.934 00:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:22:47.934 00:38:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # setup_nvmf_tgt /tmp/tmp.yZEmyQR5mU 00:22:47.934 00:38:13 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.yZEmyQR5mU 00:22:47.934 00:38:13 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:47.934 [2024-05-15 00:38:14.016225] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:47.934 00:38:14 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:48.194 00:38:14 nvmf_tcp.nvmf_tls -- 
target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:48.194 [2024-05-15 00:38:14.312226] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:48.194 [2024-05-15 00:38:14.312327] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:48.194 [2024-05-15 00:38:14.312578] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:48.194 00:38:14 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:48.455 malloc0 00:22:48.455 00:38:14 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:48.715 00:38:14 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yZEmyQR5mU 00:22:48.715 [2024-05-15 00:38:14.775509] tcp.c:3575:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:48.715 [2024-05-15 00:38:14.775558] tcp.c:3661:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:48.715 [2024-05-15 00:38:14.775583] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:48.715 request: 00:22:48.715 { 00:22:48.715 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:48.715 "host": "nqn.2016-06.io.spdk:host1", 00:22:48.715 "psk": "/tmp/tmp.yZEmyQR5mU", 00:22:48.715 "method": "nvmf_subsystem_add_host", 00:22:48.715 "req_id": 1 00:22:48.715 } 00:22:48.715 Got JSON-RPC error response 00:22:48.715 response: 00:22:48.715 { 00:22:48.715 "code": -32603, 00:22:48.715 "message": "Internal error" 00:22:48.715 } 00:22:48.715 00:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@652 -- # es=1 00:22:48.715 00:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:22:48.715 00:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:22:48.715 00:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:22:48.715 00:38:14 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 2062887 00:22:48.715 00:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2062887 ']' 00:22:48.715 00:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2062887 00:22:48.715 00:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:48.715 00:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:48.715 00:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2062887 00:22:48.715 00:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:48.715 00:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:48.715 00:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2062887' 00:22:48.715 killing process with pid 2062887 00:22:48.715 00:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2062887 00:22:48.715 [2024-05-15 00:38:14.843357] app.c:1024:log_deprecation_hits: *WARNING*: 
decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:48.715 00:38:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2062887 00:22:49.283 00:38:15 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.yZEmyQR5mU 00:22:49.283 00:38:15 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:49.283 00:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:49.283 00:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:49.283 00:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.283 00:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2063501 00:22:49.283 00:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2063501 00:22:49.283 00:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2063501 ']' 00:22:49.283 00:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:49.283 00:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:49.283 00:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:49.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:49.283 00:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:49.283 00:38:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.284 00:38:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:49.542 [2024-05-15 00:38:15.448633] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:22:49.542 [2024-05-15 00:38:15.448737] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:49.542 EAL: No free 2048 kB hugepages reported on node 1 00:22:49.542 [2024-05-15 00:38:15.566883] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.542 [2024-05-15 00:38:15.663376] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:49.542 [2024-05-15 00:38:15.663418] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:49.542 [2024-05-15 00:38:15.663428] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:49.542 [2024-05-15 00:38:15.663437] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:49.542 [2024-05-15 00:38:15.663445] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
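The two PSK failures above (the -1 "Operation not permitted" from bdev_nvme_attach_controller and the -32603 "Internal error" from nvmf_subsystem_add_host) are the deliberate negative cases: bdev_nvme_load_psk and tcp_load_psk reject the key file while its permissions are too open, and the second attempt runs under the harness's NOT wrapper (target/tls.sh@177). target/tls.sh@181 then tightens the mode before retrying the positive path. A short sketch of that fix, reusing the key path and NQNs from the trace:

    chmod 0600 /tmp/tmp.yZEmyQR5mU    # mode the test applies before the retried setup
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yZEmyQR5mU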
00:22:49.542 [2024-05-15 00:38:15.663476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:50.110 00:38:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:50.110 00:38:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:50.110 00:38:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:50.110 00:38:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:50.110 00:38:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:50.110 00:38:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:50.110 00:38:16 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.yZEmyQR5mU 00:22:50.110 00:38:16 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.yZEmyQR5mU 00:22:50.110 00:38:16 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:50.370 [2024-05-15 00:38:16.285892] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:50.370 00:38:16 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:50.370 00:38:16 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:50.630 [2024-05-15 00:38:16.585960] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:50.630 [2024-05-15 00:38:16.586057] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:50.630 [2024-05-15 00:38:16.586319] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:50.630 00:38:16 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:50.630 malloc0 00:22:50.630 00:38:16 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:50.890 00:38:16 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yZEmyQR5mU 00:22:51.149 [2024-05-15 00:38:17.073656] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:51.149 00:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2063833 00:22:51.149 00:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:51.149 00:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2063833 /var/tmp/bdevperf.sock 00:22:51.149 00:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2063833 ']' 00:22:51.149 00:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:51.149 00:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:51.149 
00:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:51.149 00:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:51.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:51.149 00:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:51.149 00:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.149 [2024-05-15 00:38:17.174308] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:22:51.149 [2024-05-15 00:38:17.174449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2063833 ] 00:22:51.149 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.149 [2024-05-15 00:38:17.294872] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.407 [2024-05-15 00:38:17.391352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.974 00:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:51.974 00:38:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:51.974 00:38:17 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yZEmyQR5mU 00:22:51.974 [2024-05-15 00:38:17.991371] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:51.974 [2024-05-15 00:38:17.991470] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:51.974 TLSTESTn1 00:22:51.974 00:38:18 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py save_config 00:22:52.233 00:38:18 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:52.233 "subsystems": [ 00:22:52.233 { 00:22:52.233 "subsystem": "keyring", 00:22:52.233 "config": [] 00:22:52.233 }, 00:22:52.233 { 00:22:52.233 "subsystem": "iobuf", 00:22:52.233 "config": [ 00:22:52.233 { 00:22:52.233 "method": "iobuf_set_options", 00:22:52.233 "params": { 00:22:52.233 "small_pool_count": 8192, 00:22:52.233 "large_pool_count": 1024, 00:22:52.233 "small_bufsize": 8192, 00:22:52.233 "large_bufsize": 135168 00:22:52.233 } 00:22:52.233 } 00:22:52.233 ] 00:22:52.233 }, 00:22:52.233 { 00:22:52.233 "subsystem": "sock", 00:22:52.233 "config": [ 00:22:52.233 { 00:22:52.233 "method": "sock_impl_set_options", 00:22:52.233 "params": { 00:22:52.234 "impl_name": "posix", 00:22:52.234 "recv_buf_size": 2097152, 00:22:52.234 "send_buf_size": 2097152, 00:22:52.234 "enable_recv_pipe": true, 00:22:52.234 "enable_quickack": false, 00:22:52.234 "enable_placement_id": 0, 00:22:52.234 "enable_zerocopy_send_server": true, 00:22:52.234 "enable_zerocopy_send_client": false, 00:22:52.234 "zerocopy_threshold": 0, 00:22:52.234 "tls_version": 0, 00:22:52.234 "enable_ktls": false 00:22:52.234 } 00:22:52.234 }, 00:22:52.234 { 00:22:52.234 "method": "sock_impl_set_options", 00:22:52.234 "params": { 00:22:52.234 "impl_name": "ssl", 00:22:52.234 "recv_buf_size": 4096, 
00:22:52.234 "send_buf_size": 4096, 00:22:52.234 "enable_recv_pipe": true, 00:22:52.234 "enable_quickack": false, 00:22:52.234 "enable_placement_id": 0, 00:22:52.234 "enable_zerocopy_send_server": true, 00:22:52.234 "enable_zerocopy_send_client": false, 00:22:52.234 "zerocopy_threshold": 0, 00:22:52.234 "tls_version": 0, 00:22:52.234 "enable_ktls": false 00:22:52.234 } 00:22:52.234 } 00:22:52.234 ] 00:22:52.234 }, 00:22:52.234 { 00:22:52.234 "subsystem": "vmd", 00:22:52.234 "config": [] 00:22:52.234 }, 00:22:52.234 { 00:22:52.234 "subsystem": "accel", 00:22:52.234 "config": [ 00:22:52.234 { 00:22:52.234 "method": "accel_set_options", 00:22:52.234 "params": { 00:22:52.234 "small_cache_size": 128, 00:22:52.234 "large_cache_size": 16, 00:22:52.234 "task_count": 2048, 00:22:52.234 "sequence_count": 2048, 00:22:52.234 "buf_count": 2048 00:22:52.234 } 00:22:52.234 } 00:22:52.234 ] 00:22:52.234 }, 00:22:52.234 { 00:22:52.234 "subsystem": "bdev", 00:22:52.234 "config": [ 00:22:52.234 { 00:22:52.234 "method": "bdev_set_options", 00:22:52.234 "params": { 00:22:52.234 "bdev_io_pool_size": 65535, 00:22:52.234 "bdev_io_cache_size": 256, 00:22:52.234 "bdev_auto_examine": true, 00:22:52.234 "iobuf_small_cache_size": 128, 00:22:52.234 "iobuf_large_cache_size": 16 00:22:52.234 } 00:22:52.234 }, 00:22:52.234 { 00:22:52.234 "method": "bdev_raid_set_options", 00:22:52.234 "params": { 00:22:52.234 "process_window_size_kb": 1024 00:22:52.234 } 00:22:52.234 }, 00:22:52.234 { 00:22:52.234 "method": "bdev_iscsi_set_options", 00:22:52.234 "params": { 00:22:52.234 "timeout_sec": 30 00:22:52.234 } 00:22:52.234 }, 00:22:52.234 { 00:22:52.234 "method": "bdev_nvme_set_options", 00:22:52.234 "params": { 00:22:52.234 "action_on_timeout": "none", 00:22:52.234 "timeout_us": 0, 00:22:52.234 "timeout_admin_us": 0, 00:22:52.234 "keep_alive_timeout_ms": 10000, 00:22:52.234 "arbitration_burst": 0, 00:22:52.234 "low_priority_weight": 0, 00:22:52.234 "medium_priority_weight": 0, 00:22:52.234 "high_priority_weight": 0, 00:22:52.234 "nvme_adminq_poll_period_us": 10000, 00:22:52.234 "nvme_ioq_poll_period_us": 0, 00:22:52.234 "io_queue_requests": 0, 00:22:52.234 "delay_cmd_submit": true, 00:22:52.234 "transport_retry_count": 4, 00:22:52.234 "bdev_retry_count": 3, 00:22:52.234 "transport_ack_timeout": 0, 00:22:52.234 "ctrlr_loss_timeout_sec": 0, 00:22:52.234 "reconnect_delay_sec": 0, 00:22:52.234 "fast_io_fail_timeout_sec": 0, 00:22:52.234 "disable_auto_failback": false, 00:22:52.234 "generate_uuids": false, 00:22:52.234 "transport_tos": 0, 00:22:52.234 "nvme_error_stat": false, 00:22:52.234 "rdma_srq_size": 0, 00:22:52.234 "io_path_stat": false, 00:22:52.234 "allow_accel_sequence": false, 00:22:52.234 "rdma_max_cq_size": 0, 00:22:52.234 "rdma_cm_event_timeout_ms": 0, 00:22:52.234 "dhchap_digests": [ 00:22:52.234 "sha256", 00:22:52.234 "sha384", 00:22:52.234 "sha512" 00:22:52.234 ], 00:22:52.234 "dhchap_dhgroups": [ 00:22:52.234 "null", 00:22:52.234 "ffdhe2048", 00:22:52.234 "ffdhe3072", 00:22:52.234 "ffdhe4096", 00:22:52.234 "ffdhe6144", 00:22:52.234 "ffdhe8192" 00:22:52.234 ] 00:22:52.234 } 00:22:52.234 }, 00:22:52.234 { 00:22:52.234 "method": "bdev_nvme_set_hotplug", 00:22:52.234 "params": { 00:22:52.234 "period_us": 100000, 00:22:52.234 "enable": false 00:22:52.234 } 00:22:52.234 }, 00:22:52.234 { 00:22:52.234 "method": "bdev_malloc_create", 00:22:52.234 "params": { 00:22:52.234 "name": "malloc0", 00:22:52.234 "num_blocks": 8192, 00:22:52.234 "block_size": 4096, 00:22:52.234 "physical_block_size": 4096, 00:22:52.234 "uuid": 
"de400b5d-c985-4b8c-a574-eb12a21b423f", 00:22:52.234 "optimal_io_boundary": 0 00:22:52.234 } 00:22:52.234 }, 00:22:52.234 { 00:22:52.234 "method": "bdev_wait_for_examine" 00:22:52.234 } 00:22:52.234 ] 00:22:52.234 }, 00:22:52.234 { 00:22:52.234 "subsystem": "nbd", 00:22:52.234 "config": [] 00:22:52.234 }, 00:22:52.234 { 00:22:52.234 "subsystem": "scheduler", 00:22:52.234 "config": [ 00:22:52.234 { 00:22:52.234 "method": "framework_set_scheduler", 00:22:52.234 "params": { 00:22:52.234 "name": "static" 00:22:52.234 } 00:22:52.234 } 00:22:52.234 ] 00:22:52.234 }, 00:22:52.234 { 00:22:52.234 "subsystem": "nvmf", 00:22:52.234 "config": [ 00:22:52.234 { 00:22:52.234 "method": "nvmf_set_config", 00:22:52.234 "params": { 00:22:52.234 "discovery_filter": "match_any", 00:22:52.234 "admin_cmd_passthru": { 00:22:52.234 "identify_ctrlr": false 00:22:52.234 } 00:22:52.234 } 00:22:52.234 }, 00:22:52.234 { 00:22:52.234 "method": "nvmf_set_max_subsystems", 00:22:52.234 "params": { 00:22:52.234 "max_subsystems": 1024 00:22:52.234 } 00:22:52.234 }, 00:22:52.234 { 00:22:52.234 "method": "nvmf_set_crdt", 00:22:52.234 "params": { 00:22:52.234 "crdt1": 0, 00:22:52.234 "crdt2": 0, 00:22:52.234 "crdt3": 0 00:22:52.234 } 00:22:52.234 }, 00:22:52.234 { 00:22:52.234 "method": "nvmf_create_transport", 00:22:52.235 "params": { 00:22:52.235 "trtype": "TCP", 00:22:52.235 "max_queue_depth": 128, 00:22:52.235 "max_io_qpairs_per_ctrlr": 127, 00:22:52.235 "in_capsule_data_size": 4096, 00:22:52.235 "max_io_size": 131072, 00:22:52.235 "io_unit_size": 131072, 00:22:52.235 "max_aq_depth": 128, 00:22:52.235 "num_shared_buffers": 511, 00:22:52.235 "buf_cache_size": 4294967295, 00:22:52.235 "dif_insert_or_strip": false, 00:22:52.235 "zcopy": false, 00:22:52.235 "c2h_success": false, 00:22:52.235 "sock_priority": 0, 00:22:52.235 "abort_timeout_sec": 1, 00:22:52.235 "ack_timeout": 0, 00:22:52.235 "data_wr_pool_size": 0 00:22:52.235 } 00:22:52.235 }, 00:22:52.235 { 00:22:52.235 "method": "nvmf_create_subsystem", 00:22:52.235 "params": { 00:22:52.235 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.235 "allow_any_host": false, 00:22:52.235 "serial_number": "SPDK00000000000001", 00:22:52.235 "model_number": "SPDK bdev Controller", 00:22:52.235 "max_namespaces": 10, 00:22:52.235 "min_cntlid": 1, 00:22:52.235 "max_cntlid": 65519, 00:22:52.235 "ana_reporting": false 00:22:52.235 } 00:22:52.235 }, 00:22:52.235 { 00:22:52.235 "method": "nvmf_subsystem_add_host", 00:22:52.235 "params": { 00:22:52.235 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.235 "host": "nqn.2016-06.io.spdk:host1", 00:22:52.235 "psk": "/tmp/tmp.yZEmyQR5mU" 00:22:52.235 } 00:22:52.235 }, 00:22:52.235 { 00:22:52.235 "method": "nvmf_subsystem_add_ns", 00:22:52.235 "params": { 00:22:52.235 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.235 "namespace": { 00:22:52.235 "nsid": 1, 00:22:52.235 "bdev_name": "malloc0", 00:22:52.235 "nguid": "DE400B5DC9854B8CA574EB12A21B423F", 00:22:52.235 "uuid": "de400b5d-c985-4b8c-a574-eb12a21b423f", 00:22:52.235 "no_auto_visible": false 00:22:52.235 } 00:22:52.235 } 00:22:52.235 }, 00:22:52.235 { 00:22:52.235 "method": "nvmf_subsystem_add_listener", 00:22:52.235 "params": { 00:22:52.235 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.235 "listen_address": { 00:22:52.235 "trtype": "TCP", 00:22:52.235 "adrfam": "IPv4", 00:22:52.235 "traddr": "10.0.0.2", 00:22:52.235 "trsvcid": "4420" 00:22:52.235 }, 00:22:52.235 "secure_channel": true 00:22:52.235 } 00:22:52.235 } 00:22:52.235 ] 00:22:52.235 } 00:22:52.235 ] 00:22:52.235 }' 00:22:52.235 00:38:18 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:52.495 00:38:18 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:52.495 "subsystems": [ 00:22:52.495 { 00:22:52.495 "subsystem": "keyring", 00:22:52.495 "config": [] 00:22:52.495 }, 00:22:52.495 { 00:22:52.495 "subsystem": "iobuf", 00:22:52.495 "config": [ 00:22:52.495 { 00:22:52.495 "method": "iobuf_set_options", 00:22:52.495 "params": { 00:22:52.495 "small_pool_count": 8192, 00:22:52.495 "large_pool_count": 1024, 00:22:52.495 "small_bufsize": 8192, 00:22:52.495 "large_bufsize": 135168 00:22:52.495 } 00:22:52.495 } 00:22:52.495 ] 00:22:52.495 }, 00:22:52.495 { 00:22:52.495 "subsystem": "sock", 00:22:52.495 "config": [ 00:22:52.495 { 00:22:52.495 "method": "sock_impl_set_options", 00:22:52.495 "params": { 00:22:52.495 "impl_name": "posix", 00:22:52.495 "recv_buf_size": 2097152, 00:22:52.495 "send_buf_size": 2097152, 00:22:52.495 "enable_recv_pipe": true, 00:22:52.495 "enable_quickack": false, 00:22:52.495 "enable_placement_id": 0, 00:22:52.495 "enable_zerocopy_send_server": true, 00:22:52.495 "enable_zerocopy_send_client": false, 00:22:52.495 "zerocopy_threshold": 0, 00:22:52.495 "tls_version": 0, 00:22:52.495 "enable_ktls": false 00:22:52.495 } 00:22:52.495 }, 00:22:52.495 { 00:22:52.495 "method": "sock_impl_set_options", 00:22:52.495 "params": { 00:22:52.495 "impl_name": "ssl", 00:22:52.495 "recv_buf_size": 4096, 00:22:52.495 "send_buf_size": 4096, 00:22:52.495 "enable_recv_pipe": true, 00:22:52.495 "enable_quickack": false, 00:22:52.495 "enable_placement_id": 0, 00:22:52.495 "enable_zerocopy_send_server": true, 00:22:52.495 "enable_zerocopy_send_client": false, 00:22:52.495 "zerocopy_threshold": 0, 00:22:52.495 "tls_version": 0, 00:22:52.495 "enable_ktls": false 00:22:52.495 } 00:22:52.495 } 00:22:52.495 ] 00:22:52.495 }, 00:22:52.495 { 00:22:52.495 "subsystem": "vmd", 00:22:52.495 "config": [] 00:22:52.495 }, 00:22:52.495 { 00:22:52.495 "subsystem": "accel", 00:22:52.495 "config": [ 00:22:52.495 { 00:22:52.495 "method": "accel_set_options", 00:22:52.495 "params": { 00:22:52.495 "small_cache_size": 128, 00:22:52.495 "large_cache_size": 16, 00:22:52.495 "task_count": 2048, 00:22:52.495 "sequence_count": 2048, 00:22:52.495 "buf_count": 2048 00:22:52.495 } 00:22:52.495 } 00:22:52.495 ] 00:22:52.495 }, 00:22:52.496 { 00:22:52.496 "subsystem": "bdev", 00:22:52.496 "config": [ 00:22:52.496 { 00:22:52.496 "method": "bdev_set_options", 00:22:52.496 "params": { 00:22:52.496 "bdev_io_pool_size": 65535, 00:22:52.496 "bdev_io_cache_size": 256, 00:22:52.496 "bdev_auto_examine": true, 00:22:52.496 "iobuf_small_cache_size": 128, 00:22:52.496 "iobuf_large_cache_size": 16 00:22:52.496 } 00:22:52.496 }, 00:22:52.496 { 00:22:52.496 "method": "bdev_raid_set_options", 00:22:52.496 "params": { 00:22:52.496 "process_window_size_kb": 1024 00:22:52.496 } 00:22:52.496 }, 00:22:52.496 { 00:22:52.496 "method": "bdev_iscsi_set_options", 00:22:52.496 "params": { 00:22:52.496 "timeout_sec": 30 00:22:52.496 } 00:22:52.496 }, 00:22:52.496 { 00:22:52.496 "method": "bdev_nvme_set_options", 00:22:52.496 "params": { 00:22:52.496 "action_on_timeout": "none", 00:22:52.496 "timeout_us": 0, 00:22:52.496 "timeout_admin_us": 0, 00:22:52.496 "keep_alive_timeout_ms": 10000, 00:22:52.496 "arbitration_burst": 0, 00:22:52.496 "low_priority_weight": 0, 00:22:52.496 "medium_priority_weight": 0, 00:22:52.496 "high_priority_weight": 0, 00:22:52.496 
"nvme_adminq_poll_period_us": 10000, 00:22:52.496 "nvme_ioq_poll_period_us": 0, 00:22:52.496 "io_queue_requests": 512, 00:22:52.496 "delay_cmd_submit": true, 00:22:52.496 "transport_retry_count": 4, 00:22:52.496 "bdev_retry_count": 3, 00:22:52.496 "transport_ack_timeout": 0, 00:22:52.496 "ctrlr_loss_timeout_sec": 0, 00:22:52.496 "reconnect_delay_sec": 0, 00:22:52.496 "fast_io_fail_timeout_sec": 0, 00:22:52.496 "disable_auto_failback": false, 00:22:52.496 "generate_uuids": false, 00:22:52.496 "transport_tos": 0, 00:22:52.496 "nvme_error_stat": false, 00:22:52.496 "rdma_srq_size": 0, 00:22:52.496 "io_path_stat": false, 00:22:52.496 "allow_accel_sequence": false, 00:22:52.496 "rdma_max_cq_size": 0, 00:22:52.496 "rdma_cm_event_timeout_ms": 0, 00:22:52.496 "dhchap_digests": [ 00:22:52.496 "sha256", 00:22:52.496 "sha384", 00:22:52.496 "sha512" 00:22:52.496 ], 00:22:52.496 "dhchap_dhgroups": [ 00:22:52.496 "null", 00:22:52.496 "ffdhe2048", 00:22:52.496 "ffdhe3072", 00:22:52.496 "ffdhe4096", 00:22:52.496 "ffdhe6144", 00:22:52.496 "ffdhe8192" 00:22:52.496 ] 00:22:52.496 } 00:22:52.496 }, 00:22:52.496 { 00:22:52.496 "method": "bdev_nvme_attach_controller", 00:22:52.496 "params": { 00:22:52.496 "name": "TLSTEST", 00:22:52.496 "trtype": "TCP", 00:22:52.496 "adrfam": "IPv4", 00:22:52.496 "traddr": "10.0.0.2", 00:22:52.496 "trsvcid": "4420", 00:22:52.496 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.496 "prchk_reftag": false, 00:22:52.496 "prchk_guard": false, 00:22:52.496 "ctrlr_loss_timeout_sec": 0, 00:22:52.496 "reconnect_delay_sec": 0, 00:22:52.496 "fast_io_fail_timeout_sec": 0, 00:22:52.496 "psk": "/tmp/tmp.yZEmyQR5mU", 00:22:52.496 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:52.496 "hdgst": false, 00:22:52.496 "ddgst": false 00:22:52.496 } 00:22:52.496 }, 00:22:52.496 { 00:22:52.496 "method": "bdev_nvme_set_hotplug", 00:22:52.496 "params": { 00:22:52.496 "period_us": 100000, 00:22:52.496 "enable": false 00:22:52.496 } 00:22:52.496 }, 00:22:52.496 { 00:22:52.496 "method": "bdev_wait_for_examine" 00:22:52.496 } 00:22:52.496 ] 00:22:52.496 }, 00:22:52.496 { 00:22:52.496 "subsystem": "nbd", 00:22:52.496 "config": [] 00:22:52.496 } 00:22:52.496 ] 00:22:52.496 }' 00:22:52.496 00:38:18 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 2063833 00:22:52.496 00:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2063833 ']' 00:22:52.496 00:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2063833 00:22:52.496 00:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:52.496 00:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:52.496 00:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2063833 00:22:52.496 00:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:22:52.496 00:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:22:52.496 00:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2063833' 00:22:52.496 killing process with pid 2063833 00:22:52.496 00:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2063833 00:22:52.496 Received shutdown signal, test time was about 10.000000 seconds 00:22:52.496 00:22:52.496 Latency(us) 00:22:52.496 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.496 =================================================================================================================== 
00:22:52.496 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:52.496 [2024-05-15 00:38:18.542809] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:52.496 00:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2063833 00:22:52.756 00:38:18 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 2063501 00:22:52.756 00:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2063501 ']' 00:22:52.756 00:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2063501 00:22:52.756 00:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:22:52.756 00:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:22:52.756 00:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2063501 00:22:53.014 00:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:22:53.014 00:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:22:53.014 00:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2063501' 00:22:53.014 killing process with pid 2063501 00:22:53.014 00:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2063501 00:22:53.014 [2024-05-15 00:38:18.953370] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:22:53.014 [2024-05-15 00:38:18.953431] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:53.014 00:38:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2063501 00:22:53.273 00:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:53.273 00:38:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:53.273 00:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:22:53.273 00:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.273 00:38:19 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:53.273 "subsystems": [ 00:22:53.273 { 00:22:53.273 "subsystem": "keyring", 00:22:53.273 "config": [] 00:22:53.273 }, 00:22:53.273 { 00:22:53.273 "subsystem": "iobuf", 00:22:53.273 "config": [ 00:22:53.273 { 00:22:53.273 "method": "iobuf_set_options", 00:22:53.273 "params": { 00:22:53.273 "small_pool_count": 8192, 00:22:53.273 "large_pool_count": 1024, 00:22:53.273 "small_bufsize": 8192, 00:22:53.273 "large_bufsize": 135168 00:22:53.273 } 00:22:53.273 } 00:22:53.273 ] 00:22:53.273 }, 00:22:53.273 { 00:22:53.273 "subsystem": "sock", 00:22:53.273 "config": [ 00:22:53.273 { 00:22:53.273 "method": "sock_impl_set_options", 00:22:53.273 "params": { 00:22:53.273 "impl_name": "posix", 00:22:53.273 "recv_buf_size": 2097152, 00:22:53.273 "send_buf_size": 2097152, 00:22:53.273 "enable_recv_pipe": true, 00:22:53.273 "enable_quickack": false, 00:22:53.273 "enable_placement_id": 0, 00:22:53.273 "enable_zerocopy_send_server": true, 00:22:53.273 "enable_zerocopy_send_client": false, 00:22:53.273 "zerocopy_threshold": 0, 00:22:53.273 "tls_version": 0, 00:22:53.273 "enable_ktls": false 00:22:53.273 } 00:22:53.273 }, 00:22:53.273 { 00:22:53.273 "method": "sock_impl_set_options", 00:22:53.273 "params": { 00:22:53.273 
"impl_name": "ssl", 00:22:53.273 "recv_buf_size": 4096, 00:22:53.273 "send_buf_size": 4096, 00:22:53.273 "enable_recv_pipe": true, 00:22:53.273 "enable_quickack": false, 00:22:53.273 "enable_placement_id": 0, 00:22:53.273 "enable_zerocopy_send_server": true, 00:22:53.273 "enable_zerocopy_send_client": false, 00:22:53.273 "zerocopy_threshold": 0, 00:22:53.273 "tls_version": 0, 00:22:53.273 "enable_ktls": false 00:22:53.273 } 00:22:53.273 } 00:22:53.273 ] 00:22:53.273 }, 00:22:53.273 { 00:22:53.273 "subsystem": "vmd", 00:22:53.273 "config": [] 00:22:53.273 }, 00:22:53.273 { 00:22:53.273 "subsystem": "accel", 00:22:53.273 "config": [ 00:22:53.273 { 00:22:53.273 "method": "accel_set_options", 00:22:53.273 "params": { 00:22:53.273 "small_cache_size": 128, 00:22:53.273 "large_cache_size": 16, 00:22:53.273 "task_count": 2048, 00:22:53.273 "sequence_count": 2048, 00:22:53.273 "buf_count": 2048 00:22:53.273 } 00:22:53.273 } 00:22:53.273 ] 00:22:53.273 }, 00:22:53.273 { 00:22:53.273 "subsystem": "bdev", 00:22:53.273 "config": [ 00:22:53.273 { 00:22:53.273 "method": "bdev_set_options", 00:22:53.273 "params": { 00:22:53.273 "bdev_io_pool_size": 65535, 00:22:53.273 "bdev_io_cache_size": 256, 00:22:53.273 "bdev_auto_examine": true, 00:22:53.273 "iobuf_small_cache_size": 128, 00:22:53.273 "iobuf_large_cache_size": 16 00:22:53.273 } 00:22:53.273 }, 00:22:53.273 { 00:22:53.273 "method": "bdev_raid_set_options", 00:22:53.273 "params": { 00:22:53.273 "process_window_size_kb": 1024 00:22:53.273 } 00:22:53.273 }, 00:22:53.273 { 00:22:53.273 "method": "bdev_iscsi_set_options", 00:22:53.273 "params": { 00:22:53.273 "timeout_sec": 30 00:22:53.273 } 00:22:53.273 }, 00:22:53.273 { 00:22:53.273 "method": "bdev_nvme_set_options", 00:22:53.273 "params": { 00:22:53.273 "action_on_timeout": "none", 00:22:53.273 "timeout_us": 0, 00:22:53.273 "timeout_admin_us": 0, 00:22:53.273 "keep_alive_timeout_ms": 10000, 00:22:53.273 "arbitration_burst": 0, 00:22:53.273 "low_priority_weight": 0, 00:22:53.273 "medium_priority_weight": 0, 00:22:53.273 "high_priority_weight": 0, 00:22:53.273 "nvme_adminq_poll_period_us": 10000, 00:22:53.273 "nvme_ioq_poll_period_us": 0, 00:22:53.273 "io_queue_requests": 0, 00:22:53.273 "delay_cmd_submit": true, 00:22:53.273 "transport_retry_count": 4, 00:22:53.273 "bdev_retry_count": 3, 00:22:53.273 "transport_ack_timeout": 0, 00:22:53.273 "ctrlr_loss_timeout_sec": 0, 00:22:53.273 "reconnect_delay_sec": 0, 00:22:53.273 "fast_io_fail_timeout_sec": 0, 00:22:53.273 "disable_auto_failback": false, 00:22:53.273 "generate_uuids": false, 00:22:53.273 "transport_tos": 0, 00:22:53.273 "nvme_error_stat": false, 00:22:53.273 "rdma_srq_size": 0, 00:22:53.273 "io_path_stat": false, 00:22:53.273 "allow_accel_sequence": false, 00:22:53.273 "rdma_max_cq_size": 0, 00:22:53.273 "rdma_cm_event_timeout_ms": 0, 00:22:53.273 "dhchap_digests": [ 00:22:53.273 "sha256", 00:22:53.273 "sha384", 00:22:53.273 "sha512" 00:22:53.273 ], 00:22:53.273 "dhchap_dhgroups": [ 00:22:53.273 "null", 00:22:53.273 "ffdhe2048", 00:22:53.273 "ffdhe3072", 00:22:53.273 "ffdhe4096", 00:22:53.273 "ffdhe6144", 00:22:53.273 "ffdhe8192" 00:22:53.273 ] 00:22:53.273 } 00:22:53.273 }, 00:22:53.273 { 00:22:53.273 "method": "bdev_nvme_set_hotplug", 00:22:53.273 "params": { 00:22:53.273 "period_us": 100000, 00:22:53.273 "enable": false 00:22:53.273 } 00:22:53.273 }, 00:22:53.273 { 00:22:53.273 "method": "bdev_malloc_create", 00:22:53.273 "params": { 00:22:53.273 "name": "malloc0", 00:22:53.273 "num_blocks": 8192, 00:22:53.273 "block_size": 4096, 00:22:53.273 
"physical_block_size": 4096, 00:22:53.273 "uuid": "de400b5d-c985-4b8c-a574-eb12a21b423f", 00:22:53.273 "optimal_io_boundary": 0 00:22:53.273 } 00:22:53.273 }, 00:22:53.274 { 00:22:53.274 "method": "bdev_wait_for_examine" 00:22:53.274 } 00:22:53.274 ] 00:22:53.274 }, 00:22:53.274 { 00:22:53.274 "subsystem": "nbd", 00:22:53.274 "config": [] 00:22:53.274 }, 00:22:53.274 { 00:22:53.274 "subsystem": "scheduler", 00:22:53.274 "config": [ 00:22:53.274 { 00:22:53.274 "method": "framework_set_scheduler", 00:22:53.274 "params": { 00:22:53.274 "name": "static" 00:22:53.274 } 00:22:53.274 } 00:22:53.274 ] 00:22:53.274 }, 00:22:53.274 { 00:22:53.274 "subsystem": "nvmf", 00:22:53.274 "config": [ 00:22:53.274 { 00:22:53.274 "method": "nvmf_set_config", 00:22:53.274 "params": { 00:22:53.274 "discovery_filter": "match_any", 00:22:53.274 "admin_cmd_passthru": { 00:22:53.274 "identify_ctrlr": false 00:22:53.274 } 00:22:53.274 } 00:22:53.274 }, 00:22:53.274 { 00:22:53.274 "method": "nvmf_set_max_subsystems", 00:22:53.274 "params": { 00:22:53.274 "max_subsystems": 1024 00:22:53.274 } 00:22:53.274 }, 00:22:53.274 { 00:22:53.274 "method": "nvmf_set_crdt", 00:22:53.274 "params": { 00:22:53.274 "crdt1": 0, 00:22:53.274 "crdt2": 0, 00:22:53.274 "crdt3": 0 00:22:53.274 } 00:22:53.274 }, 00:22:53.274 { 00:22:53.274 "method": "nvmf_create_transport", 00:22:53.274 "params": { 00:22:53.274 "trtype": "TCP", 00:22:53.274 "max_queue_depth": 128, 00:22:53.274 "max_io_qpairs_per_ctrlr": 127, 00:22:53.274 "in_capsule_data_size": 4096, 00:22:53.274 "max_io_size": 131072, 00:22:53.274 "io_unit_size": 131072, 00:22:53.274 "max_aq_depth": 128, 00:22:53.274 "num_shared_buffers": 511, 00:22:53.274 "buf_cache_size": 4294967295, 00:22:53.274 "dif_insert_or_strip": false, 00:22:53.274 "zcopy": false, 00:22:53.274 "c2h_success": false, 00:22:53.274 "sock_priority": 0, 00:22:53.274 "abort_timeout_sec": 1, 00:22:53.274 "ack_timeout": 0, 00:22:53.274 "data_wr_pool_size": 0 00:22:53.274 } 00:22:53.274 }, 00:22:53.274 { 00:22:53.274 "method": "nvmf_create_subsystem", 00:22:53.274 "params": { 00:22:53.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.274 "allow_any_host": false, 00:22:53.274 "serial_number": "SPDK00000000000001", 00:22:53.274 "model_number": "SPDK bdev Controller", 00:22:53.274 "max_namespaces": 10, 00:22:53.274 "min_cntlid": 1, 00:22:53.274 "max_cntlid": 65519, 00:22:53.274 "ana_reporting": false 00:22:53.274 } 00:22:53.274 }, 00:22:53.274 { 00:22:53.274 "method": "nvmf_subsystem_add_host", 00:22:53.274 "params": { 00:22:53.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.274 "host": "nqn.2016-06.io.spdk:host1", 00:22:53.274 "psk": "/tmp/tmp.yZEmyQR5mU" 00:22:53.274 } 00:22:53.274 }, 00:22:53.274 { 00:22:53.274 "method": "nvmf_subsystem_add_ns", 00:22:53.274 "params": { 00:22:53.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.274 "namespace": { 00:22:53.274 "nsid": 1, 00:22:53.274 "bdev_name": "malloc0", 00:22:53.274 "nguid": "DE400B5DC9854B8CA574EB12A21B423F", 00:22:53.274 "uuid": "de400b5d-c985-4b8c-a574-eb12a21b423f", 00:22:53.274 "no_auto_visible": false 00:22:53.274 } 00:22:53.274 } 00:22:53.274 }, 00:22:53.274 { 00:22:53.274 "method": "nvmf_subsystem_add_listener", 00:22:53.274 "params": { 00:22:53.274 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.274 "listen_address": { 00:22:53.274 "trtype": "TCP", 00:22:53.274 "adrfam": "IPv4", 00:22:53.274 "traddr": "10.0.0.2", 00:22:53.274 "trsvcid": "4420" 00:22:53.274 }, 00:22:53.274 "secure_channel": true 00:22:53.274 } 00:22:53.274 } 00:22:53.274 ] 00:22:53.274 } 
00:22:53.274 ] 00:22:53.274 }' 00:22:53.532 00:38:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2064164 00:22:53.532 00:38:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2064164 00:22:53.532 00:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2064164 ']' 00:22:53.532 00:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.532 00:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:53.532 00:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:53.532 00:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:53.532 00:38:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.532 00:38:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:53.532 [2024-05-15 00:38:19.519157] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:22:53.532 [2024-05-15 00:38:19.519263] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:53.532 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.532 [2024-05-15 00:38:19.644135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.790 [2024-05-15 00:38:19.741326] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.790 [2024-05-15 00:38:19.741363] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.790 [2024-05-15 00:38:19.741373] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:53.790 [2024-05-15 00:38:19.741383] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:53.790 [2024-05-15 00:38:19.741390] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
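The target started here (pid 2064164, target/tls.sh@203) is not built up with individual RPCs; it replays the JSON that save_config dumped from the previous target (target/tls.sh@196), fed in through process substitution as -c /dev/fd/62. The same pattern written out with an ordinary file; the /tmp path below is illustrative only, and the netns and debug flags mirror the earlier launches:

    ./scripts/rpc.py save_config > /tmp/tgt_config.json    # snapshot of the live target's configuration
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /tmp/tgt_config.json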
00:22:53.790 [2024-05-15 00:38:19.741471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.050 [2024-05-15 00:38:20.043234] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.050 [2024-05-15 00:38:20.059179] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:54.050 [2024-05-15 00:38:20.075165] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:22:54.050 [2024-05-15 00:38:20.075237] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:54.050 [2024-05-15 00:38:20.075438] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:54.311 00:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:54.311 00:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:54.311 00:38:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:54.311 00:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:22:54.311 00:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.311 00:38:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.311 00:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2064472 00:22:54.311 00:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2064472 /var/tmp/bdevperf.sock 00:22:54.311 00:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2064472 ']' 00:22:54.311 00:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:54.311 00:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:22:54.311 00:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:54.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
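At this point the replayed target is listening for TLS on 10.0.0.2 port 4420 and bdevperf (pid 2064472) is about to drive it. For reference, the sequence that setup_nvmf_tgt (target/tls.sh@49-58) runs on the target side each time, condensed with the arguments from the trace; the -k flag corresponds to the listener's "secure_channel": true in the saved configuration above:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.yZEmyQR5mU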
00:22:54.311 00:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:22:54.311 00:38:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.311 00:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:22:54.311 "subsystems": [ 00:22:54.311 { 00:22:54.311 "subsystem": "keyring", 00:22:54.311 "config": [] 00:22:54.311 }, 00:22:54.311 { 00:22:54.311 "subsystem": "iobuf", 00:22:54.311 "config": [ 00:22:54.311 { 00:22:54.311 "method": "iobuf_set_options", 00:22:54.311 "params": { 00:22:54.311 "small_pool_count": 8192, 00:22:54.311 "large_pool_count": 1024, 00:22:54.311 "small_bufsize": 8192, 00:22:54.311 "large_bufsize": 135168 00:22:54.311 } 00:22:54.311 } 00:22:54.311 ] 00:22:54.311 }, 00:22:54.311 { 00:22:54.311 "subsystem": "sock", 00:22:54.311 "config": [ 00:22:54.311 { 00:22:54.311 "method": "sock_impl_set_options", 00:22:54.311 "params": { 00:22:54.311 "impl_name": "posix", 00:22:54.311 "recv_buf_size": 2097152, 00:22:54.311 "send_buf_size": 2097152, 00:22:54.311 "enable_recv_pipe": true, 00:22:54.311 "enable_quickack": false, 00:22:54.311 "enable_placement_id": 0, 00:22:54.311 "enable_zerocopy_send_server": true, 00:22:54.311 "enable_zerocopy_send_client": false, 00:22:54.311 "zerocopy_threshold": 0, 00:22:54.311 "tls_version": 0, 00:22:54.311 "enable_ktls": false 00:22:54.311 } 00:22:54.311 }, 00:22:54.311 { 00:22:54.311 "method": "sock_impl_set_options", 00:22:54.311 "params": { 00:22:54.311 "impl_name": "ssl", 00:22:54.311 "recv_buf_size": 4096, 00:22:54.311 "send_buf_size": 4096, 00:22:54.311 "enable_recv_pipe": true, 00:22:54.311 "enable_quickack": false, 00:22:54.311 "enable_placement_id": 0, 00:22:54.311 "enable_zerocopy_send_server": true, 00:22:54.311 "enable_zerocopy_send_client": false, 00:22:54.311 "zerocopy_threshold": 0, 00:22:54.311 "tls_version": 0, 00:22:54.311 "enable_ktls": false 00:22:54.311 } 00:22:54.311 } 00:22:54.311 ] 00:22:54.311 }, 00:22:54.311 { 00:22:54.311 "subsystem": "vmd", 00:22:54.311 "config": [] 00:22:54.311 }, 00:22:54.311 { 00:22:54.311 "subsystem": "accel", 00:22:54.311 "config": [ 00:22:54.311 { 00:22:54.311 "method": "accel_set_options", 00:22:54.311 "params": { 00:22:54.311 "small_cache_size": 128, 00:22:54.311 "large_cache_size": 16, 00:22:54.311 "task_count": 2048, 00:22:54.311 "sequence_count": 2048, 00:22:54.311 "buf_count": 2048 00:22:54.311 } 00:22:54.311 } 00:22:54.311 ] 00:22:54.311 }, 00:22:54.311 { 00:22:54.311 "subsystem": "bdev", 00:22:54.311 "config": [ 00:22:54.311 { 00:22:54.311 "method": "bdev_set_options", 00:22:54.311 "params": { 00:22:54.311 "bdev_io_pool_size": 65535, 00:22:54.311 "bdev_io_cache_size": 256, 00:22:54.311 "bdev_auto_examine": true, 00:22:54.311 "iobuf_small_cache_size": 128, 00:22:54.311 "iobuf_large_cache_size": 16 00:22:54.311 } 00:22:54.311 }, 00:22:54.311 { 00:22:54.311 "method": "bdev_raid_set_options", 00:22:54.311 "params": { 00:22:54.311 "process_window_size_kb": 1024 00:22:54.311 } 00:22:54.311 }, 00:22:54.311 { 00:22:54.311 "method": "bdev_iscsi_set_options", 00:22:54.311 "params": { 00:22:54.311 "timeout_sec": 30 00:22:54.311 } 00:22:54.311 }, 00:22:54.311 { 00:22:54.311 "method": "bdev_nvme_set_options", 00:22:54.311 "params": { 00:22:54.311 "action_on_timeout": "none", 00:22:54.311 "timeout_us": 0, 00:22:54.311 "timeout_admin_us": 0, 00:22:54.311 "keep_alive_timeout_ms": 10000, 00:22:54.311 "arbitration_burst": 0, 00:22:54.311 "low_priority_weight": 0, 00:22:54.311 "medium_priority_weight": 0, 00:22:54.311 "high_priority_weight": 0, 00:22:54.311 
"nvme_adminq_poll_period_us": 10000, 00:22:54.311 "nvme_ioq_poll_period_us": 0, 00:22:54.311 "io_queue_requests": 512, 00:22:54.311 "delay_cmd_submit": true, 00:22:54.311 "transport_retry_count": 4, 00:22:54.311 "bdev_retry_count": 3, 00:22:54.311 "transport_ack_timeout": 0, 00:22:54.311 "ctrlr_loss_timeout_sec": 0, 00:22:54.311 "reconnect_delay_sec": 0, 00:22:54.311 "fast_io_fail_timeout_sec": 0, 00:22:54.311 "disable_auto_failback": false, 00:22:54.311 "generate_uuids": false, 00:22:54.311 "transport_tos": 0, 00:22:54.311 "nvme_error_stat": false, 00:22:54.311 "rdma_srq_size": 0, 00:22:54.311 "io_path_stat": false, 00:22:54.311 "allow_accel_sequence": false, 00:22:54.311 "rdma_max_cq_size": 0, 00:22:54.311 "rdma_cm_event_timeout_ms": 0, 00:22:54.311 "dhchap_digests": [ 00:22:54.311 "sha256", 00:22:54.311 "sha384", 00:22:54.311 "sha512" 00:22:54.311 ], 00:22:54.311 "dhchap_dhgroups": [ 00:22:54.311 "null", 00:22:54.311 "ffdhe2048", 00:22:54.311 "ffdhe3072", 00:22:54.311 "ffdhe4096", 00:22:54.311 "ffdhe6144", 00:22:54.311 "ffdhe8192" 00:22:54.311 ] 00:22:54.311 } 00:22:54.311 }, 00:22:54.311 { 00:22:54.311 "method": "bdev_nvme_attach_controller", 00:22:54.311 "params": { 00:22:54.311 "name": "TLSTEST", 00:22:54.311 "trtype": "TCP", 00:22:54.311 "adrfam": "IPv4", 00:22:54.311 "traddr": "10.0.0.2", 00:22:54.311 "trsvcid": "4420", 00:22:54.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:54.311 "prchk_reftag": false, 00:22:54.311 "prchk_guard": false, 00:22:54.311 "ctrlr_loss_timeout_sec": 0, 00:22:54.311 "reconnect_delay_sec": 0, 00:22:54.311 "fast_io_fail_timeout_sec": 0, 00:22:54.311 "psk": "/tmp/tmp.yZEmyQR5mU", 00:22:54.312 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:54.312 "hdgst": false, 00:22:54.312 "ddgst": false 00:22:54.312 } 00:22:54.312 }, 00:22:54.312 { 00:22:54.312 "method": "bdev_nvme_set_hotplug", 00:22:54.312 "params": { 00:22:54.312 "period_us": 100000, 00:22:54.312 "enable": false 00:22:54.312 } 00:22:54.312 }, 00:22:54.312 { 00:22:54.312 "method": "bdev_wait_for_examine" 00:22:54.312 } 00:22:54.312 ] 00:22:54.312 }, 00:22:54.312 { 00:22:54.312 "subsystem": "nbd", 00:22:54.312 "config": [] 00:22:54.312 } 00:22:54.312 ] 00:22:54.312 }' 00:22:54.312 00:38:20 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:54.312 [2024-05-15 00:38:20.340025] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:22:54.312 [2024-05-15 00:38:20.340173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2064472 ] 00:22:54.312 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.312 [2024-05-15 00:38:20.470849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.572 [2024-05-15 00:38:20.568361] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.830 [2024-05-15 00:38:20.769721] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:54.830 [2024-05-15 00:38:20.769828] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:55.088 00:38:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:22:55.088 00:38:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:22:55.088 00:38:21 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:55.088 Running I/O for 10 seconds... 00:23:05.062 00:23:05.062 Latency(us) 00:23:05.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.062 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:05.062 Verification LBA range: start 0x0 length 0x2000 00:23:05.062 TLSTESTn1 : 10.02 5708.52 22.30 0.00 0.00 22386.60 4518.53 40563.33 00:23:05.063 =================================================================================================================== 00:23:05.063 Total : 5708.52 22.30 0.00 0.00 22386.60 4518.53 40563.33 00:23:05.063 0 00:23:05.063 00:38:31 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:05.063 00:38:31 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 2064472 00:23:05.063 00:38:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2064472 ']' 00:23:05.063 00:38:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2064472 00:23:05.063 00:38:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:23:05.063 00:38:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:05.063 00:38:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2064472 00:23:05.063 00:38:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:23:05.063 00:38:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:23:05.063 00:38:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2064472' 00:23:05.063 killing process with pid 2064472 00:23:05.063 00:38:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2064472 00:23:05.063 Received shutdown signal, test time was about 10.000000 seconds 00:23:05.063 00:23:05.063 Latency(us) 00:23:05.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:05.063 =================================================================================================================== 00:23:05.063 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:05.063 [2024-05-15 00:38:31.187754] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal 
in v24.09 hit 1 times 00:23:05.063 00:38:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2064472 00:23:05.631 00:38:31 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 2064164 00:23:05.631 00:38:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2064164 ']' 00:23:05.631 00:38:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2064164 00:23:05.631 00:38:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:23:05.631 00:38:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:05.631 00:38:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2064164 00:23:05.631 00:38:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:23:05.631 00:38:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:23:05.631 00:38:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2064164' 00:23:05.631 killing process with pid 2064164 00:23:05.631 00:38:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2064164 00:23:05.631 [2024-05-15 00:38:31.627856] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:05.631 00:38:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2064164 00:23:05.631 [2024-05-15 00:38:31.627924] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:06.199 00:38:32 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:06.199 00:38:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:06.199 00:38:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:06.199 00:38:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.199 00:38:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2066669 00:23:06.199 00:38:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2066669 00:23:06.199 00:38:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2066669 ']' 00:23:06.199 00:38:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.199 00:38:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:06.199 00:38:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.199 00:38:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:06.199 00:38:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:06.199 00:38:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.199 [2024-05-15 00:38:32.258772] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:23:06.199 [2024-05-15 00:38:32.258913] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.199 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.456 [2024-05-15 00:38:32.402434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.456 [2024-05-15 00:38:32.500407] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:06.456 [2024-05-15 00:38:32.500451] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:06.456 [2024-05-15 00:38:32.500462] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:06.456 [2024-05-15 00:38:32.500474] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:06.456 [2024-05-15 00:38:32.500483] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:06.456 [2024-05-15 00:38:32.500520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:07.023 00:38:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:07.023 00:38:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:23:07.023 00:38:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:07.023 00:38:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:07.023 00:38:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.023 00:38:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.023 00:38:32 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.yZEmyQR5mU 00:23:07.023 00:38:32 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.yZEmyQR5mU 00:23:07.023 00:38:32 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:07.023 [2024-05-15 00:38:33.090245] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:07.023 00:38:33 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:07.282 00:38:33 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:07.282 [2024-05-15 00:38:33.354261] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:07.282 [2024-05-15 00:38:33.354353] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:07.282 [2024-05-15 00:38:33.354571] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:07.282 00:38:33 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:07.540 malloc0 00:23:07.540 00:38:33 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:07.540 00:38:33 
nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.yZEmyQR5mU 00:23:07.799 [2024-05-15 00:38:33.770146] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:07.799 00:38:33 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2067026 00:23:07.799 00:38:33 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:07.799 00:38:33 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2067026 /var/tmp/bdevperf.sock 00:23:07.799 00:38:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2067026 ']' 00:23:07.799 00:38:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.799 00:38:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:07.799 00:38:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.799 00:38:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:07.799 00:38:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:07.799 00:38:33 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:07.799 [2024-05-15 00:38:33.857318] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:23:07.799 [2024-05-15 00:38:33.857448] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2067026 ] 00:23:07.799 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.057 [2024-05-15 00:38:33.974803] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:08.057 [2024-05-15 00:38:34.066501] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:08.622 00:38:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:08.622 00:38:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:23:08.622 00:38:34 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yZEmyQR5mU 00:23:08.622 00:38:34 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:08.622 [2024-05-15 00:38:34.779506] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:08.880 nvme0n1 00:23:08.880 00:38:34 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:08.880 Running I/O for 1 seconds... 
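The xtrace above is the target-side TLS setup (target/tls.sh setup_nvmf_tgt) followed by the first short bdevperf run against it. Collected in one place, and assuming the same SPDK checkout path and the temporary PSK file created earlier in the test (/tmp/tmp.yZEmyQR5mU in this run), the target-side sequence is:

    RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    PSK=/tmp/tmp.yZEmyQR5mU
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS-capable
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # the host entry carries the PSK file for this host NQN
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk $PSK

The -k flag on the listener and the --psk file on the host entry are what trigger the "TLS support is considered experimental" and "PSK path ... deprecated" notices seen in the log.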
00:23:09.815 00:23:09.815 Latency(us) 00:23:09.815 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:09.815 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:09.815 Verification LBA range: start 0x0 length 0x2000 00:23:09.815 nvme0n1 : 1.01 5379.46 21.01 0.00 0.00 23652.69 4259.84 52428.80 00:23:09.815 =================================================================================================================== 00:23:09.815 Total : 5379.46 21.01 0.00 0.00 23652.69 4259.84 52428.80 00:23:09.815 0 00:23:09.815 00:38:35 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2067026 00:23:09.815 00:38:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2067026 ']' 00:23:09.815 00:38:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2067026 00:23:09.815 00:38:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:23:09.815 00:38:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:09.815 00:38:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2067026 00:23:10.073 00:38:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:23:10.073 00:38:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:23:10.073 00:38:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2067026' 00:23:10.073 killing process with pid 2067026 00:23:10.073 00:38:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2067026 00:23:10.073 Received shutdown signal, test time was about 1.000000 seconds 00:23:10.073 00:23:10.073 Latency(us) 00:23:10.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:10.073 =================================================================================================================== 00:23:10.073 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:10.073 00:38:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2067026 00:23:10.331 00:38:36 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2066669 00:23:10.331 00:38:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2066669 ']' 00:23:10.331 00:38:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2066669 00:23:10.331 00:38:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:23:10.331 00:38:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:10.331 00:38:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2066669 00:23:10.331 00:38:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:10.331 00:38:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:10.331 00:38:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2066669' 00:23:10.331 killing process with pid 2066669 00:23:10.331 00:38:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2066669 00:23:10.331 [2024-05-15 00:38:36.412604] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:10.331 [2024-05-15 00:38:36.412667] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:10.331 00:38:36 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@971 -- # wait 2066669 00:23:10.898 00:38:36 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:23:10.898 00:38:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:10.898 00:38:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:10.898 00:38:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.898 00:38:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2067621 00:23:10.898 00:38:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2067621 00:23:10.898 00:38:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2067621 ']' 00:23:10.898 00:38:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:10.898 00:38:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:10.898 00:38:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:10.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:10.898 00:38:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:10.898 00:38:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:10.898 00:38:36 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:10.898 [2024-05-15 00:38:37.021366] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:23:10.898 [2024-05-15 00:38:37.021476] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.158 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.158 [2024-05-15 00:38:37.151880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.158 [2024-05-15 00:38:37.251679] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.158 [2024-05-15 00:38:37.251734] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.158 [2024-05-15 00:38:37.251744] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.158 [2024-05-15 00:38:37.251757] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.158 [2024-05-15 00:38:37.251766] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
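The target for this phase (nvmfpid=2067621) is brought up by nvmfappstart: nvmf_tgt is launched inside the cvl_0_0_ns_spdk network namespace and the script then waits on its RPC socket before issuing any rpc.py calls. A minimal sketch, with the socket poll standing in for the test's waitforlisten helper:

    SPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &
    nvmfpid=$!
    # simplified stand-in for waitforlisten: block until the UNIX-domain RPC socket exists
    until [ -S /var/tmp/spdk.sock ]; do sleep 0.2; done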
00:23:11.158 [2024-05-15 00:38:37.251810] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.728 00:38:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:11.728 00:38:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:23:11.728 00:38:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:11.728 00:38:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:11.728 00:38:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.728 00:38:37 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.728 00:38:37 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:11.728 00:38:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:11.728 00:38:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.728 [2024-05-15 00:38:37.774002] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.728 malloc0 00:23:11.728 [2024-05-15 00:38:37.827013] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:11.728 [2024-05-15 00:38:37.827114] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:11.728 [2024-05-15 00:38:37.827353] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.728 00:38:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:11.728 00:38:37 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=2067825 00:23:11.728 00:38:37 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 2067825 /var/tmp/bdevperf.sock 00:23:11.728 00:38:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2067825 ']' 00:23:11.728 00:38:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:11.728 00:38:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:11.728 00:38:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:11.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:11.728 00:38:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:11.729 00:38:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:11.729 00:38:37 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:11.989 [2024-05-15 00:38:37.929982] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
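On the initiator side, bdevperf is started idle (-z) with its own RPC socket (-r /var/tmp/bdevperf.sock), gets the PSK registered as a keyring entry, attaches the controller over TLS, and is then told to run the workload through bdevperf.py. The paths and arguments below are the ones visible in the log; the test waits for /var/tmp/bdevperf.sock to appear before the RPC calls:

    SPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk
    rpc_bperf() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bdevperf.sock "$@"; }

    "$SPDK_DIR/build/examples/bdevperf" -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 &
    # register the PSK file under the name key0, then reference it by name on attach
    rpc_bperf keyring_file_add_key key0 /tmp/tmp.yZEmyQR5mU
    rpc_bperf bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # trigger the actual I/O run over the same socket
    "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests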
00:23:11.989 [2024-05-15 00:38:37.930092] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2067825 ] 00:23:11.989 EAL: No free 2048 kB hugepages reported on node 1 00:23:11.989 [2024-05-15 00:38:38.047136] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.989 [2024-05-15 00:38:38.143531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.556 00:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:12.556 00:38:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:23:12.556 00:38:38 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yZEmyQR5mU 00:23:12.816 00:38:38 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:12.816 [2024-05-15 00:38:38.902215] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:12.816 nvme0n1 00:23:13.074 00:38:38 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:13.074 Running I/O for 1 seconds... 00:23:14.012 00:23:14.012 Latency(us) 00:23:14.012 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.012 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:14.012 Verification LBA range: start 0x0 length 0x2000 00:23:14.012 nvme0n1 : 1.02 5548.78 21.67 0.00 0.00 22857.09 6760.56 23730.93 00:23:14.012 =================================================================================================================== 00:23:14.012 Total : 5548.78 21.67 0.00 0.00 22857.09 6760.56 23730.93 00:23:14.012 0 00:23:14.012 00:38:40 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:14.012 00:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:14.012 00:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.270 00:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:14.270 00:38:40 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:14.270 "subsystems": [ 00:23:14.270 { 00:23:14.271 "subsystem": "keyring", 00:23:14.271 "config": [ 00:23:14.271 { 00:23:14.271 "method": "keyring_file_add_key", 00:23:14.271 "params": { 00:23:14.271 "name": "key0", 00:23:14.271 "path": "/tmp/tmp.yZEmyQR5mU" 00:23:14.271 } 00:23:14.271 } 00:23:14.271 ] 00:23:14.271 }, 00:23:14.271 { 00:23:14.271 "subsystem": "iobuf", 00:23:14.271 "config": [ 00:23:14.271 { 00:23:14.271 "method": "iobuf_set_options", 00:23:14.271 "params": { 00:23:14.271 "small_pool_count": 8192, 00:23:14.271 "large_pool_count": 1024, 00:23:14.271 "small_bufsize": 8192, 00:23:14.271 "large_bufsize": 135168 00:23:14.271 } 00:23:14.271 } 00:23:14.271 ] 00:23:14.271 }, 00:23:14.271 { 00:23:14.271 "subsystem": "sock", 00:23:14.271 "config": [ 00:23:14.271 { 00:23:14.271 "method": "sock_impl_set_options", 00:23:14.271 "params": { 00:23:14.271 "impl_name": "posix", 00:23:14.271 "recv_buf_size": 2097152, 00:23:14.271 
"send_buf_size": 2097152, 00:23:14.271 "enable_recv_pipe": true, 00:23:14.271 "enable_quickack": false, 00:23:14.271 "enable_placement_id": 0, 00:23:14.271 "enable_zerocopy_send_server": true, 00:23:14.271 "enable_zerocopy_send_client": false, 00:23:14.271 "zerocopy_threshold": 0, 00:23:14.271 "tls_version": 0, 00:23:14.271 "enable_ktls": false 00:23:14.271 } 00:23:14.271 }, 00:23:14.271 { 00:23:14.271 "method": "sock_impl_set_options", 00:23:14.271 "params": { 00:23:14.271 "impl_name": "ssl", 00:23:14.271 "recv_buf_size": 4096, 00:23:14.271 "send_buf_size": 4096, 00:23:14.271 "enable_recv_pipe": true, 00:23:14.271 "enable_quickack": false, 00:23:14.271 "enable_placement_id": 0, 00:23:14.271 "enable_zerocopy_send_server": true, 00:23:14.271 "enable_zerocopy_send_client": false, 00:23:14.271 "zerocopy_threshold": 0, 00:23:14.271 "tls_version": 0, 00:23:14.271 "enable_ktls": false 00:23:14.271 } 00:23:14.271 } 00:23:14.271 ] 00:23:14.271 }, 00:23:14.271 { 00:23:14.271 "subsystem": "vmd", 00:23:14.271 "config": [] 00:23:14.271 }, 00:23:14.271 { 00:23:14.271 "subsystem": "accel", 00:23:14.271 "config": [ 00:23:14.271 { 00:23:14.271 "method": "accel_set_options", 00:23:14.271 "params": { 00:23:14.271 "small_cache_size": 128, 00:23:14.271 "large_cache_size": 16, 00:23:14.271 "task_count": 2048, 00:23:14.271 "sequence_count": 2048, 00:23:14.271 "buf_count": 2048 00:23:14.271 } 00:23:14.271 } 00:23:14.271 ] 00:23:14.271 }, 00:23:14.271 { 00:23:14.271 "subsystem": "bdev", 00:23:14.271 "config": [ 00:23:14.271 { 00:23:14.271 "method": "bdev_set_options", 00:23:14.271 "params": { 00:23:14.271 "bdev_io_pool_size": 65535, 00:23:14.271 "bdev_io_cache_size": 256, 00:23:14.271 "bdev_auto_examine": true, 00:23:14.271 "iobuf_small_cache_size": 128, 00:23:14.271 "iobuf_large_cache_size": 16 00:23:14.271 } 00:23:14.271 }, 00:23:14.271 { 00:23:14.271 "method": "bdev_raid_set_options", 00:23:14.271 "params": { 00:23:14.271 "process_window_size_kb": 1024 00:23:14.271 } 00:23:14.271 }, 00:23:14.271 { 00:23:14.271 "method": "bdev_iscsi_set_options", 00:23:14.271 "params": { 00:23:14.271 "timeout_sec": 30 00:23:14.271 } 00:23:14.271 }, 00:23:14.271 { 00:23:14.271 "method": "bdev_nvme_set_options", 00:23:14.271 "params": { 00:23:14.271 "action_on_timeout": "none", 00:23:14.271 "timeout_us": 0, 00:23:14.271 "timeout_admin_us": 0, 00:23:14.271 "keep_alive_timeout_ms": 10000, 00:23:14.271 "arbitration_burst": 0, 00:23:14.271 "low_priority_weight": 0, 00:23:14.271 "medium_priority_weight": 0, 00:23:14.271 "high_priority_weight": 0, 00:23:14.271 "nvme_adminq_poll_period_us": 10000, 00:23:14.271 "nvme_ioq_poll_period_us": 0, 00:23:14.271 "io_queue_requests": 0, 00:23:14.271 "delay_cmd_submit": true, 00:23:14.271 "transport_retry_count": 4, 00:23:14.271 "bdev_retry_count": 3, 00:23:14.271 "transport_ack_timeout": 0, 00:23:14.271 "ctrlr_loss_timeout_sec": 0, 00:23:14.271 "reconnect_delay_sec": 0, 00:23:14.271 "fast_io_fail_timeout_sec": 0, 00:23:14.271 "disable_auto_failback": false, 00:23:14.271 "generate_uuids": false, 00:23:14.271 "transport_tos": 0, 00:23:14.271 "nvme_error_stat": false, 00:23:14.271 "rdma_srq_size": 0, 00:23:14.271 "io_path_stat": false, 00:23:14.271 "allow_accel_sequence": false, 00:23:14.271 "rdma_max_cq_size": 0, 00:23:14.271 "rdma_cm_event_timeout_ms": 0, 00:23:14.271 "dhchap_digests": [ 00:23:14.271 "sha256", 00:23:14.271 "sha384", 00:23:14.271 "sha512" 00:23:14.271 ], 00:23:14.271 "dhchap_dhgroups": [ 00:23:14.271 "null", 00:23:14.271 "ffdhe2048", 00:23:14.271 "ffdhe3072", 00:23:14.271 
"ffdhe4096", 00:23:14.271 "ffdhe6144", 00:23:14.271 "ffdhe8192" 00:23:14.271 ] 00:23:14.271 } 00:23:14.271 }, 00:23:14.271 { 00:23:14.271 "method": "bdev_nvme_set_hotplug", 00:23:14.271 "params": { 00:23:14.271 "period_us": 100000, 00:23:14.271 "enable": false 00:23:14.271 } 00:23:14.271 }, 00:23:14.271 { 00:23:14.271 "method": "bdev_malloc_create", 00:23:14.271 "params": { 00:23:14.271 "name": "malloc0", 00:23:14.271 "num_blocks": 8192, 00:23:14.271 "block_size": 4096, 00:23:14.271 "physical_block_size": 4096, 00:23:14.271 "uuid": "5c0a6ccb-66b2-4484-a0b1-453dd8a45c40", 00:23:14.271 "optimal_io_boundary": 0 00:23:14.271 } 00:23:14.271 }, 00:23:14.271 { 00:23:14.271 "method": "bdev_wait_for_examine" 00:23:14.271 } 00:23:14.271 ] 00:23:14.271 }, 00:23:14.271 { 00:23:14.271 "subsystem": "nbd", 00:23:14.271 "config": [] 00:23:14.271 }, 00:23:14.271 { 00:23:14.271 "subsystem": "scheduler", 00:23:14.271 "config": [ 00:23:14.271 { 00:23:14.271 "method": "framework_set_scheduler", 00:23:14.271 "params": { 00:23:14.271 "name": "static" 00:23:14.271 } 00:23:14.271 } 00:23:14.271 ] 00:23:14.271 }, 00:23:14.271 { 00:23:14.271 "subsystem": "nvmf", 00:23:14.271 "config": [ 00:23:14.271 { 00:23:14.271 "method": "nvmf_set_config", 00:23:14.271 "params": { 00:23:14.271 "discovery_filter": "match_any", 00:23:14.271 "admin_cmd_passthru": { 00:23:14.271 "identify_ctrlr": false 00:23:14.271 } 00:23:14.271 } 00:23:14.271 }, 00:23:14.271 { 00:23:14.271 "method": "nvmf_set_max_subsystems", 00:23:14.271 "params": { 00:23:14.271 "max_subsystems": 1024 00:23:14.271 } 00:23:14.271 }, 00:23:14.271 { 00:23:14.271 "method": "nvmf_set_crdt", 00:23:14.271 "params": { 00:23:14.271 "crdt1": 0, 00:23:14.271 "crdt2": 0, 00:23:14.271 "crdt3": 0 00:23:14.271 } 00:23:14.271 }, 00:23:14.271 { 00:23:14.271 "method": "nvmf_create_transport", 00:23:14.271 "params": { 00:23:14.271 "trtype": "TCP", 00:23:14.271 "max_queue_depth": 128, 00:23:14.271 "max_io_qpairs_per_ctrlr": 127, 00:23:14.271 "in_capsule_data_size": 4096, 00:23:14.271 "max_io_size": 131072, 00:23:14.271 "io_unit_size": 131072, 00:23:14.271 "max_aq_depth": 128, 00:23:14.271 "num_shared_buffers": 511, 00:23:14.271 "buf_cache_size": 4294967295, 00:23:14.271 "dif_insert_or_strip": false, 00:23:14.271 "zcopy": false, 00:23:14.271 "c2h_success": false, 00:23:14.271 "sock_priority": 0, 00:23:14.271 "abort_timeout_sec": 1, 00:23:14.271 "ack_timeout": 0, 00:23:14.271 "data_wr_pool_size": 0 00:23:14.271 } 00:23:14.271 }, 00:23:14.271 { 00:23:14.271 "method": "nvmf_create_subsystem", 00:23:14.271 "params": { 00:23:14.271 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.271 "allow_any_host": false, 00:23:14.271 "serial_number": "00000000000000000000", 00:23:14.271 "model_number": "SPDK bdev Controller", 00:23:14.271 "max_namespaces": 32, 00:23:14.271 "min_cntlid": 1, 00:23:14.271 "max_cntlid": 65519, 00:23:14.271 "ana_reporting": false 00:23:14.271 } 00:23:14.271 }, 00:23:14.271 { 00:23:14.271 "method": "nvmf_subsystem_add_host", 00:23:14.271 "params": { 00:23:14.271 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.271 "host": "nqn.2016-06.io.spdk:host1", 00:23:14.271 "psk": "key0" 00:23:14.271 } 00:23:14.271 }, 00:23:14.271 { 00:23:14.271 "method": "nvmf_subsystem_add_ns", 00:23:14.271 "params": { 00:23:14.271 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.271 "namespace": { 00:23:14.271 "nsid": 1, 00:23:14.271 "bdev_name": "malloc0", 00:23:14.271 "nguid": "5C0A6CCB66B24484A0B1453DD8A45C40", 00:23:14.271 "uuid": "5c0a6ccb-66b2-4484-a0b1-453dd8a45c40", 00:23:14.271 "no_auto_visible": 
false 00:23:14.271 } 00:23:14.271 } 00:23:14.271 }, 00:23:14.271 { 00:23:14.271 "method": "nvmf_subsystem_add_listener", 00:23:14.271 "params": { 00:23:14.271 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.271 "listen_address": { 00:23:14.271 "trtype": "TCP", 00:23:14.271 "adrfam": "IPv4", 00:23:14.271 "traddr": "10.0.0.2", 00:23:14.271 "trsvcid": "4420" 00:23:14.271 }, 00:23:14.271 "secure_channel": true 00:23:14.271 } 00:23:14.271 } 00:23:14.271 ] 00:23:14.271 } 00:23:14.271 ] 00:23:14.272 }' 00:23:14.272 00:38:40 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:14.272 00:38:40 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:14.272 "subsystems": [ 00:23:14.272 { 00:23:14.272 "subsystem": "keyring", 00:23:14.272 "config": [ 00:23:14.272 { 00:23:14.272 "method": "keyring_file_add_key", 00:23:14.272 "params": { 00:23:14.272 "name": "key0", 00:23:14.272 "path": "/tmp/tmp.yZEmyQR5mU" 00:23:14.272 } 00:23:14.272 } 00:23:14.272 ] 00:23:14.272 }, 00:23:14.272 { 00:23:14.272 "subsystem": "iobuf", 00:23:14.272 "config": [ 00:23:14.272 { 00:23:14.272 "method": "iobuf_set_options", 00:23:14.272 "params": { 00:23:14.272 "small_pool_count": 8192, 00:23:14.272 "large_pool_count": 1024, 00:23:14.272 "small_bufsize": 8192, 00:23:14.272 "large_bufsize": 135168 00:23:14.272 } 00:23:14.272 } 00:23:14.272 ] 00:23:14.272 }, 00:23:14.272 { 00:23:14.272 "subsystem": "sock", 00:23:14.272 "config": [ 00:23:14.272 { 00:23:14.272 "method": "sock_impl_set_options", 00:23:14.272 "params": { 00:23:14.272 "impl_name": "posix", 00:23:14.272 "recv_buf_size": 2097152, 00:23:14.272 "send_buf_size": 2097152, 00:23:14.272 "enable_recv_pipe": true, 00:23:14.272 "enable_quickack": false, 00:23:14.272 "enable_placement_id": 0, 00:23:14.272 "enable_zerocopy_send_server": true, 00:23:14.272 "enable_zerocopy_send_client": false, 00:23:14.272 "zerocopy_threshold": 0, 00:23:14.272 "tls_version": 0, 00:23:14.272 "enable_ktls": false 00:23:14.272 } 00:23:14.272 }, 00:23:14.272 { 00:23:14.272 "method": "sock_impl_set_options", 00:23:14.272 "params": { 00:23:14.272 "impl_name": "ssl", 00:23:14.272 "recv_buf_size": 4096, 00:23:14.272 "send_buf_size": 4096, 00:23:14.272 "enable_recv_pipe": true, 00:23:14.272 "enable_quickack": false, 00:23:14.272 "enable_placement_id": 0, 00:23:14.272 "enable_zerocopy_send_server": true, 00:23:14.272 "enable_zerocopy_send_client": false, 00:23:14.272 "zerocopy_threshold": 0, 00:23:14.272 "tls_version": 0, 00:23:14.272 "enable_ktls": false 00:23:14.272 } 00:23:14.272 } 00:23:14.272 ] 00:23:14.272 }, 00:23:14.272 { 00:23:14.272 "subsystem": "vmd", 00:23:14.272 "config": [] 00:23:14.272 }, 00:23:14.272 { 00:23:14.272 "subsystem": "accel", 00:23:14.272 "config": [ 00:23:14.272 { 00:23:14.272 "method": "accel_set_options", 00:23:14.272 "params": { 00:23:14.272 "small_cache_size": 128, 00:23:14.272 "large_cache_size": 16, 00:23:14.272 "task_count": 2048, 00:23:14.272 "sequence_count": 2048, 00:23:14.272 "buf_count": 2048 00:23:14.272 } 00:23:14.272 } 00:23:14.272 ] 00:23:14.272 }, 00:23:14.272 { 00:23:14.272 "subsystem": "bdev", 00:23:14.272 "config": [ 00:23:14.272 { 00:23:14.272 "method": "bdev_set_options", 00:23:14.272 "params": { 00:23:14.272 "bdev_io_pool_size": 65535, 00:23:14.272 "bdev_io_cache_size": 256, 00:23:14.272 "bdev_auto_examine": true, 00:23:14.272 "iobuf_small_cache_size": 128, 00:23:14.272 "iobuf_large_cache_size": 16 00:23:14.272 } 00:23:14.272 }, 00:23:14.272 { 00:23:14.272 
"method": "bdev_raid_set_options", 00:23:14.272 "params": { 00:23:14.272 "process_window_size_kb": 1024 00:23:14.272 } 00:23:14.272 }, 00:23:14.272 { 00:23:14.272 "method": "bdev_iscsi_set_options", 00:23:14.272 "params": { 00:23:14.272 "timeout_sec": 30 00:23:14.272 } 00:23:14.272 }, 00:23:14.272 { 00:23:14.272 "method": "bdev_nvme_set_options", 00:23:14.272 "params": { 00:23:14.272 "action_on_timeout": "none", 00:23:14.272 "timeout_us": 0, 00:23:14.272 "timeout_admin_us": 0, 00:23:14.272 "keep_alive_timeout_ms": 10000, 00:23:14.272 "arbitration_burst": 0, 00:23:14.272 "low_priority_weight": 0, 00:23:14.272 "medium_priority_weight": 0, 00:23:14.272 "high_priority_weight": 0, 00:23:14.272 "nvme_adminq_poll_period_us": 10000, 00:23:14.272 "nvme_ioq_poll_period_us": 0, 00:23:14.272 "io_queue_requests": 512, 00:23:14.272 "delay_cmd_submit": true, 00:23:14.272 "transport_retry_count": 4, 00:23:14.272 "bdev_retry_count": 3, 00:23:14.272 "transport_ack_timeout": 0, 00:23:14.272 "ctrlr_loss_timeout_sec": 0, 00:23:14.272 "reconnect_delay_sec": 0, 00:23:14.272 "fast_io_fail_timeout_sec": 0, 00:23:14.272 "disable_auto_failback": false, 00:23:14.272 "generate_uuids": false, 00:23:14.272 "transport_tos": 0, 00:23:14.272 "nvme_error_stat": false, 00:23:14.272 "rdma_srq_size": 0, 00:23:14.272 "io_path_stat": false, 00:23:14.272 "allow_accel_sequence": false, 00:23:14.272 "rdma_max_cq_size": 0, 00:23:14.272 "rdma_cm_event_timeout_ms": 0, 00:23:14.272 "dhchap_digests": [ 00:23:14.272 "sha256", 00:23:14.272 "sha384", 00:23:14.272 "sha512" 00:23:14.272 ], 00:23:14.272 "dhchap_dhgroups": [ 00:23:14.272 "null", 00:23:14.272 "ffdhe2048", 00:23:14.272 "ffdhe3072", 00:23:14.272 "ffdhe4096", 00:23:14.272 "ffdhe6144", 00:23:14.272 "ffdhe8192" 00:23:14.272 ] 00:23:14.272 } 00:23:14.272 }, 00:23:14.272 { 00:23:14.272 "method": "bdev_nvme_attach_controller", 00:23:14.272 "params": { 00:23:14.272 "name": "nvme0", 00:23:14.272 "trtype": "TCP", 00:23:14.272 "adrfam": "IPv4", 00:23:14.272 "traddr": "10.0.0.2", 00:23:14.272 "trsvcid": "4420", 00:23:14.272 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:14.272 "prchk_reftag": false, 00:23:14.272 "prchk_guard": false, 00:23:14.272 "ctrlr_loss_timeout_sec": 0, 00:23:14.272 "reconnect_delay_sec": 0, 00:23:14.272 "fast_io_fail_timeout_sec": 0, 00:23:14.272 "psk": "key0", 00:23:14.272 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:14.272 "hdgst": false, 00:23:14.272 "ddgst": false 00:23:14.272 } 00:23:14.272 }, 00:23:14.272 { 00:23:14.272 "method": "bdev_nvme_set_hotplug", 00:23:14.272 "params": { 00:23:14.272 "period_us": 100000, 00:23:14.272 "enable": false 00:23:14.272 } 00:23:14.272 }, 00:23:14.272 { 00:23:14.272 "method": "bdev_enable_histogram", 00:23:14.272 "params": { 00:23:14.272 "name": "nvme0n1", 00:23:14.272 "enable": true 00:23:14.272 } 00:23:14.272 }, 00:23:14.272 { 00:23:14.272 "method": "bdev_wait_for_examine" 00:23:14.272 } 00:23:14.272 ] 00:23:14.272 }, 00:23:14.272 { 00:23:14.272 "subsystem": "nbd", 00:23:14.272 "config": [] 00:23:14.272 } 00:23:14.272 ] 00:23:14.272 }' 00:23:14.272 00:38:40 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 2067825 00:23:14.272 00:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2067825 ']' 00:23:14.272 00:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2067825 00:23:14.272 00:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:23:14.272 00:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:14.272 00:38:40 nvmf_tcp.nvmf_tls 
-- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2067825 00:23:14.272 00:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:23:14.272 00:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:23:14.272 00:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2067825' 00:23:14.272 killing process with pid 2067825 00:23:14.272 00:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2067825 00:23:14.272 Received shutdown signal, test time was about 1.000000 seconds 00:23:14.272 00:23:14.272 Latency(us) 00:23:14.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.272 =================================================================================================================== 00:23:14.272 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:14.272 00:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2067825 00:23:14.933 00:38:40 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 2067621 00:23:14.933 00:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2067621 ']' 00:23:14.933 00:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2067621 00:23:14.933 00:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:23:14.933 00:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:14.933 00:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2067621 00:23:14.933 00:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:14.933 00:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:14.933 00:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2067621' 00:23:14.933 killing process with pid 2067621 00:23:14.933 00:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2067621 00:23:14.933 00:38:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2067621 00:23:14.933 [2024-05-15 00:38:40.825782] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:15.193 00:38:41 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:15.193 00:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:15.193 00:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:15.193 00:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.193 00:38:41 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:15.193 "subsystems": [ 00:23:15.193 { 00:23:15.193 "subsystem": "keyring", 00:23:15.193 "config": [ 00:23:15.193 { 00:23:15.193 "method": "keyring_file_add_key", 00:23:15.193 "params": { 00:23:15.193 "name": "key0", 00:23:15.193 "path": "/tmp/tmp.yZEmyQR5mU" 00:23:15.193 } 00:23:15.193 } 00:23:15.193 ] 00:23:15.193 }, 00:23:15.193 { 00:23:15.193 "subsystem": "iobuf", 00:23:15.193 "config": [ 00:23:15.193 { 00:23:15.193 "method": "iobuf_set_options", 00:23:15.193 "params": { 00:23:15.193 "small_pool_count": 8192, 00:23:15.193 "large_pool_count": 1024, 00:23:15.193 "small_bufsize": 8192, 00:23:15.193 "large_bufsize": 135168 00:23:15.193 } 00:23:15.193 } 00:23:15.193 ] 00:23:15.193 }, 00:23:15.193 { 00:23:15.193 "subsystem": "sock", 00:23:15.193 
"config": [ 00:23:15.193 { 00:23:15.193 "method": "sock_impl_set_options", 00:23:15.193 "params": { 00:23:15.193 "impl_name": "posix", 00:23:15.193 "recv_buf_size": 2097152, 00:23:15.193 "send_buf_size": 2097152, 00:23:15.193 "enable_recv_pipe": true, 00:23:15.193 "enable_quickack": false, 00:23:15.193 "enable_placement_id": 0, 00:23:15.193 "enable_zerocopy_send_server": true, 00:23:15.193 "enable_zerocopy_send_client": false, 00:23:15.193 "zerocopy_threshold": 0, 00:23:15.193 "tls_version": 0, 00:23:15.193 "enable_ktls": false 00:23:15.193 } 00:23:15.193 }, 00:23:15.193 { 00:23:15.193 "method": "sock_impl_set_options", 00:23:15.193 "params": { 00:23:15.193 "impl_name": "ssl", 00:23:15.193 "recv_buf_size": 4096, 00:23:15.193 "send_buf_size": 4096, 00:23:15.193 "enable_recv_pipe": true, 00:23:15.193 "enable_quickack": false, 00:23:15.193 "enable_placement_id": 0, 00:23:15.193 "enable_zerocopy_send_server": true, 00:23:15.193 "enable_zerocopy_send_client": false, 00:23:15.193 "zerocopy_threshold": 0, 00:23:15.193 "tls_version": 0, 00:23:15.193 "enable_ktls": false 00:23:15.193 } 00:23:15.193 } 00:23:15.193 ] 00:23:15.193 }, 00:23:15.193 { 00:23:15.193 "subsystem": "vmd", 00:23:15.193 "config": [] 00:23:15.193 }, 00:23:15.193 { 00:23:15.193 "subsystem": "accel", 00:23:15.193 "config": [ 00:23:15.193 { 00:23:15.193 "method": "accel_set_options", 00:23:15.193 "params": { 00:23:15.193 "small_cache_size": 128, 00:23:15.193 "large_cache_size": 16, 00:23:15.193 "task_count": 2048, 00:23:15.193 "sequence_count": 2048, 00:23:15.193 "buf_count": 2048 00:23:15.193 } 00:23:15.193 } 00:23:15.193 ] 00:23:15.193 }, 00:23:15.193 { 00:23:15.193 "subsystem": "bdev", 00:23:15.193 "config": [ 00:23:15.193 { 00:23:15.193 "method": "bdev_set_options", 00:23:15.193 "params": { 00:23:15.193 "bdev_io_pool_size": 65535, 00:23:15.193 "bdev_io_cache_size": 256, 00:23:15.193 "bdev_auto_examine": true, 00:23:15.193 "iobuf_small_cache_size": 128, 00:23:15.193 "iobuf_large_cache_size": 16 00:23:15.193 } 00:23:15.193 }, 00:23:15.193 { 00:23:15.193 "method": "bdev_raid_set_options", 00:23:15.193 "params": { 00:23:15.193 "process_window_size_kb": 1024 00:23:15.193 } 00:23:15.193 }, 00:23:15.193 { 00:23:15.193 "method": "bdev_iscsi_set_options", 00:23:15.193 "params": { 00:23:15.193 "timeout_sec": 30 00:23:15.193 } 00:23:15.193 }, 00:23:15.193 { 00:23:15.193 "method": "bdev_nvme_set_options", 00:23:15.193 "params": { 00:23:15.193 "action_on_timeout": "none", 00:23:15.193 "timeout_us": 0, 00:23:15.193 "timeout_admin_us": 0, 00:23:15.193 "keep_alive_timeout_ms": 10000, 00:23:15.193 "arbitration_burst": 0, 00:23:15.193 "low_priority_weight": 0, 00:23:15.193 "medium_priority_weight": 0, 00:23:15.193 "high_priority_weight": 0, 00:23:15.193 "nvme_adminq_poll_period_us": 10000, 00:23:15.193 "nvme_ioq_poll_period_us": 0, 00:23:15.193 "io_queue_requests": 0, 00:23:15.193 "delay_cmd_submit": true, 00:23:15.193 "transport_retry_count": 4, 00:23:15.193 "bdev_retry_count": 3, 00:23:15.193 "transport_ack_timeout": 0, 00:23:15.193 "ctrlr_loss_timeout_sec": 0, 00:23:15.193 "reconnect_delay_sec": 0, 00:23:15.193 "fast_io_fail_timeout_sec": 0, 00:23:15.193 "disable_auto_failback": false, 00:23:15.193 "generate_uuids": false, 00:23:15.193 "transport_tos": 0, 00:23:15.193 "nvme_error_stat": false, 00:23:15.193 "rdma_srq_size": 0, 00:23:15.193 "io_path_stat": false, 00:23:15.193 "allow_accel_sequence": false, 00:23:15.193 "rdma_max_cq_size": 0, 00:23:15.193 "rdma_cm_event_timeout_ms": 0, 00:23:15.193 "dhchap_digests": [ 00:23:15.193 "sha256", 
00:23:15.193 "sha384", 00:23:15.193 "sha512" 00:23:15.193 ], 00:23:15.193 "dhchap_dhgroups": [ 00:23:15.193 "null", 00:23:15.193 "ffdhe2048", 00:23:15.193 "ffdhe3072", 00:23:15.193 "ffdhe4096", 00:23:15.193 "ffdhe6144", 00:23:15.193 "ffdhe8192" 00:23:15.193 ] 00:23:15.193 } 00:23:15.193 }, 00:23:15.193 { 00:23:15.193 "method": "bdev_nvme_set_hotplug", 00:23:15.193 "params": { 00:23:15.193 "period_us": 100000, 00:23:15.193 "enable": false 00:23:15.193 } 00:23:15.193 }, 00:23:15.193 { 00:23:15.193 "method": "bdev_malloc_create", 00:23:15.193 "params": { 00:23:15.193 "name": "malloc0", 00:23:15.193 "num_blocks": 8192, 00:23:15.193 "block_size": 4096, 00:23:15.193 "physical_block_size": 4096, 00:23:15.193 "uuid": "5c0a6ccb-66b2-4484-a0b1-453dd8a45c40", 00:23:15.193 "optimal_io_boundary": 0 00:23:15.193 } 00:23:15.193 }, 00:23:15.193 { 00:23:15.193 "method": "bdev_wait_for_examine" 00:23:15.193 } 00:23:15.193 ] 00:23:15.193 }, 00:23:15.193 { 00:23:15.193 "subsystem": "nbd", 00:23:15.193 "config": [] 00:23:15.193 }, 00:23:15.193 { 00:23:15.193 "subsystem": "scheduler", 00:23:15.193 "config": [ 00:23:15.193 { 00:23:15.193 "method": "framework_set_scheduler", 00:23:15.193 "params": { 00:23:15.193 "name": "static" 00:23:15.193 } 00:23:15.193 } 00:23:15.193 ] 00:23:15.193 }, 00:23:15.193 { 00:23:15.193 "subsystem": "nvmf", 00:23:15.193 "config": [ 00:23:15.193 { 00:23:15.193 "method": "nvmf_set_config", 00:23:15.193 "params": { 00:23:15.193 "discovery_filter": "match_any", 00:23:15.193 "admin_cmd_passthru": { 00:23:15.193 "identify_ctrlr": false 00:23:15.193 } 00:23:15.193 } 00:23:15.193 }, 00:23:15.193 { 00:23:15.193 "method": "nvmf_set_max_subsystems", 00:23:15.193 "params": { 00:23:15.193 "max_subsystems": 1024 00:23:15.193 } 00:23:15.193 }, 00:23:15.193 { 00:23:15.193 "method": "nvmf_set_crdt", 00:23:15.193 "params": { 00:23:15.193 "crdt1": 0, 00:23:15.193 "crdt2": 0, 00:23:15.194 "crdt3": 0 00:23:15.194 } 00:23:15.194 }, 00:23:15.194 { 00:23:15.194 "method": "nvmf_create_transport", 00:23:15.194 "params": { 00:23:15.194 "trtype": "TCP", 00:23:15.194 "max_queue_depth": 128, 00:23:15.194 "max_io_qpairs_per_ctrlr": 127, 00:23:15.194 "in_capsule_data_size": 4096, 00:23:15.194 "max_io_size": 131072, 00:23:15.194 "io_unit_size": 131072, 00:23:15.194 "max_aq_depth": 128, 00:23:15.194 "num_shared_buffers": 511, 00:23:15.194 "buf_cache_size": 4294967295, 00:23:15.194 "dif_insert_or_strip": false, 00:23:15.194 "zcopy": false, 00:23:15.194 "c2h_success": false, 00:23:15.194 "sock_priority": 0, 00:23:15.194 "abort_timeout_sec": 1, 00:23:15.194 "ack_timeout": 0, 00:23:15.194 "data_wr_pool_size": 0 00:23:15.194 } 00:23:15.194 }, 00:23:15.194 { 00:23:15.194 "method": "nvmf_create_subsystem", 00:23:15.194 "params": { 00:23:15.194 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.194 "allow_any_host": false, 00:23:15.194 "serial_number": "00000000000000000000", 00:23:15.194 "model_number": "SPDK bdev Controller", 00:23:15.194 "max_namespaces": 32, 00:23:15.194 "min_cntlid": 1, 00:23:15.194 "max_cntlid": 65519, 00:23:15.194 "ana_reporting": false 00:23:15.194 } 00:23:15.194 }, 00:23:15.194 { 00:23:15.194 "method": "nvmf_subsystem_add_host", 00:23:15.194 "params": { 00:23:15.194 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.194 "host": "nqn.2016-06.io.spdk:host1", 00:23:15.194 "psk": "key0" 00:23:15.194 } 00:23:15.194 }, 00:23:15.194 { 00:23:15.194 "method": "nvmf_subsystem_add_ns", 00:23:15.194 "params": { 00:23:15.194 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.194 "namespace": { 00:23:15.194 "nsid": 1, 
00:23:15.194 "bdev_name": "malloc0", 00:23:15.194 "nguid": "5C0A6CCB66B24484A0B1453DD8A45C40", 00:23:15.194 "uuid": "5c0a6ccb-66b2-4484-a0b1-453dd8a45c40", 00:23:15.194 "no_auto_visible": false 00:23:15.194 } 00:23:15.194 } 00:23:15.194 }, 00:23:15.194 { 00:23:15.194 "method": "nvmf_subsystem_add_listener", 00:23:15.194 "params": { 00:23:15.194 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.194 "listen_address": { 00:23:15.194 "trtype": "TCP", 00:23:15.194 "adrfam": "IPv4", 00:23:15.194 "traddr": "10.0.0.2", 00:23:15.194 "trsvcid": "4420" 00:23:15.194 }, 00:23:15.194 "secure_channel": true 00:23:15.194 } 00:23:15.194 } 00:23:15.194 ] 00:23:15.194 } 00:23:15.194 ] 00:23:15.194 }' 00:23:15.194 00:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2068457 00:23:15.194 00:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2068457 00:23:15.194 00:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2068457 ']' 00:23:15.194 00:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.194 00:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:15.194 00:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.194 00:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:15.194 00:38:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.194 00:38:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:15.453 [2024-05-15 00:38:41.398799] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:23:15.453 [2024-05-15 00:38:41.398907] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.453 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.453 [2024-05-15 00:38:41.519695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.711 [2024-05-15 00:38:41.621037] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.711 [2024-05-15 00:38:41.621079] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.711 [2024-05-15 00:38:41.621089] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.711 [2024-05-15 00:38:41.621100] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.711 [2024-05-15 00:38:41.621110] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
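The '-c /dev/fd/62' handed to nvmf_tgt above means the target is being restarted purely from the JSON captured earlier with save_config, with no follow-up rpc.py configuration; the /dev/fd path is what one would expect bash process substitution to expand to. A sketch of that pattern, under that assumption:

    SPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk
    tgtcfg=$("$SPDK_DIR/scripts/rpc.py" save_config)       # captured from the running target
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF \
        -c <(echo "$tgtcfg") &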
00:23:15.711 [2024-05-15 00:38:41.621202] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.971 [2024-05-15 00:38:41.930761] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.971 [2024-05-15 00:38:41.962682] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:15.971 [2024-05-15 00:38:41.962762] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:15.971 [2024-05-15 00:38:41.962979] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.971 00:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:15.971 00:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:23:15.971 00:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:15.971 00:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:15.971 00:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.971 00:38:42 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.971 00:38:42 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=2068754 00:23:15.971 00:38:42 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 2068754 /var/tmp/bdevperf.sock 00:23:15.971 00:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@828 -- # '[' -z 2068754 ']' 00:23:15.971 00:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.971 00:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:15.971 00:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
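For a quick look at what makes the replayed config TLS-capable, the keyring entry (key name to PSK file) and the nvmf_subsystem_add_host entry that references the key by name can be pulled out of the save_config output with jq. This is illustrative only and not part of the test itself:

    RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    # the key0 -> /tmp/tmp.* mapping lives in the keyring subsystem
    $RPC save_config | jq '.subsystems[] | select(.subsystem == "keyring")'
    # the host entry only carries "psk": "key0"
    $RPC save_config | jq '[.subsystems[]
        | select(.subsystem == "nvmf").config[]
        | select(.method == "nvmf_subsystem_add_host")]'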
00:23:15.971 00:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:15.971 00:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.971 00:38:42 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:15.971 00:38:42 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:15.971 "subsystems": [ 00:23:15.971 { 00:23:15.971 "subsystem": "keyring", 00:23:15.971 "config": [ 00:23:15.971 { 00:23:15.971 "method": "keyring_file_add_key", 00:23:15.971 "params": { 00:23:15.971 "name": "key0", 00:23:15.971 "path": "/tmp/tmp.yZEmyQR5mU" 00:23:15.971 } 00:23:15.971 } 00:23:15.971 ] 00:23:15.971 }, 00:23:15.971 { 00:23:15.971 "subsystem": "iobuf", 00:23:15.971 "config": [ 00:23:15.971 { 00:23:15.971 "method": "iobuf_set_options", 00:23:15.971 "params": { 00:23:15.971 "small_pool_count": 8192, 00:23:15.971 "large_pool_count": 1024, 00:23:15.971 "small_bufsize": 8192, 00:23:15.971 "large_bufsize": 135168 00:23:15.971 } 00:23:15.971 } 00:23:15.971 ] 00:23:15.971 }, 00:23:15.971 { 00:23:15.971 "subsystem": "sock", 00:23:15.971 "config": [ 00:23:15.971 { 00:23:15.971 "method": "sock_impl_set_options", 00:23:15.971 "params": { 00:23:15.971 "impl_name": "posix", 00:23:15.971 "recv_buf_size": 2097152, 00:23:15.971 "send_buf_size": 2097152, 00:23:15.971 "enable_recv_pipe": true, 00:23:15.971 "enable_quickack": false, 00:23:15.971 "enable_placement_id": 0, 00:23:15.971 "enable_zerocopy_send_server": true, 00:23:15.971 "enable_zerocopy_send_client": false, 00:23:15.971 "zerocopy_threshold": 0, 00:23:15.971 "tls_version": 0, 00:23:15.971 "enable_ktls": false 00:23:15.971 } 00:23:15.971 }, 00:23:15.971 { 00:23:15.971 "method": "sock_impl_set_options", 00:23:15.971 "params": { 00:23:15.971 "impl_name": "ssl", 00:23:15.971 "recv_buf_size": 4096, 00:23:15.971 "send_buf_size": 4096, 00:23:15.971 "enable_recv_pipe": true, 00:23:15.971 "enable_quickack": false, 00:23:15.971 "enable_placement_id": 0, 00:23:15.971 "enable_zerocopy_send_server": true, 00:23:15.971 "enable_zerocopy_send_client": false, 00:23:15.971 "zerocopy_threshold": 0, 00:23:15.971 "tls_version": 0, 00:23:15.971 "enable_ktls": false 00:23:15.971 } 00:23:15.971 } 00:23:15.971 ] 00:23:15.971 }, 00:23:15.971 { 00:23:15.971 "subsystem": "vmd", 00:23:15.971 "config": [] 00:23:15.971 }, 00:23:15.971 { 00:23:15.971 "subsystem": "accel", 00:23:15.971 "config": [ 00:23:15.971 { 00:23:15.971 "method": "accel_set_options", 00:23:15.971 "params": { 00:23:15.971 "small_cache_size": 128, 00:23:15.971 "large_cache_size": 16, 00:23:15.971 "task_count": 2048, 00:23:15.971 "sequence_count": 2048, 00:23:15.971 "buf_count": 2048 00:23:15.971 } 00:23:15.971 } 00:23:15.971 ] 00:23:15.971 }, 00:23:15.971 { 00:23:15.971 "subsystem": "bdev", 00:23:15.971 "config": [ 00:23:15.971 { 00:23:15.971 "method": "bdev_set_options", 00:23:15.971 "params": { 00:23:15.971 "bdev_io_pool_size": 65535, 00:23:15.971 "bdev_io_cache_size": 256, 00:23:15.971 "bdev_auto_examine": true, 00:23:15.971 "iobuf_small_cache_size": 128, 00:23:15.971 "iobuf_large_cache_size": 16 00:23:15.971 } 00:23:15.971 }, 00:23:15.971 { 00:23:15.971 "method": "bdev_raid_set_options", 00:23:15.971 "params": { 00:23:15.971 "process_window_size_kb": 1024 00:23:15.971 } 00:23:15.971 }, 00:23:15.971 { 00:23:15.971 "method": "bdev_iscsi_set_options", 00:23:15.971 "params": { 00:23:15.971 "timeout_sec": 30 00:23:15.971 } 00:23:15.971 
}, 00:23:15.971 { 00:23:15.971 "method": "bdev_nvme_set_options", 00:23:15.971 "params": { 00:23:15.971 "action_on_timeout": "none", 00:23:15.971 "timeout_us": 0, 00:23:15.971 "timeout_admin_us": 0, 00:23:15.971 "keep_alive_timeout_ms": 10000, 00:23:15.971 "arbitration_burst": 0, 00:23:15.971 "low_priority_weight": 0, 00:23:15.971 "medium_priority_weight": 0, 00:23:15.971 "high_priority_weight": 0, 00:23:15.971 "nvme_adminq_poll_period_us": 10000, 00:23:15.971 "nvme_ioq_poll_period_us": 0, 00:23:15.971 "io_queue_requests": 512, 00:23:15.971 "delay_cmd_submit": true, 00:23:15.971 "transport_retry_count": 4, 00:23:15.971 "bdev_retry_count": 3, 00:23:15.971 "transport_ack_timeout": 0, 00:23:15.971 "ctrlr_loss_timeout_sec": 0, 00:23:15.971 "reconnect_delay_sec": 0, 00:23:15.971 "fast_io_fail_timeout_sec": 0, 00:23:15.971 "disable_auto_failback": false, 00:23:15.971 "generate_uuids": false, 00:23:15.971 "transport_tos": 0, 00:23:15.971 "nvme_error_stat": false, 00:23:15.971 "rdma_srq_size": 0, 00:23:15.971 "io_path_stat": false, 00:23:15.971 "allow_accel_sequence": false, 00:23:15.971 "rdma_max_cq_size": 0, 00:23:15.971 "rdma_cm_event_timeout_ms": 0, 00:23:15.971 "dhchap_digests": [ 00:23:15.971 "sha256", 00:23:15.971 "sha384", 00:23:15.971 "sha512" 00:23:15.971 ], 00:23:15.971 "dhchap_dhgroups": [ 00:23:15.971 "null", 00:23:15.971 "ffdhe2048", 00:23:15.971 "ffdhe3072", 00:23:15.971 "ffdhe4096", 00:23:15.971 "ffdhe6144", 00:23:15.971 "ffdhe8192" 00:23:15.971 ] 00:23:15.971 } 00:23:15.971 }, 00:23:15.971 { 00:23:15.971 "method": "bdev_nvme_attach_controller", 00:23:15.971 "params": { 00:23:15.971 "name": "nvme0", 00:23:15.971 "trtype": "TCP", 00:23:15.971 "adrfam": "IPv4", 00:23:15.971 "traddr": "10.0.0.2", 00:23:15.971 "trsvcid": "4420", 00:23:15.971 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.971 "prchk_reftag": false, 00:23:15.971 "prchk_guard": false, 00:23:15.971 "ctrlr_loss_timeout_sec": 0, 00:23:15.971 "reconnect_delay_sec": 0, 00:23:15.971 "fast_io_fail_timeout_sec": 0, 00:23:15.971 "psk": "key0", 00:23:15.971 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:15.971 "hdgst": false, 00:23:15.971 "ddgst": false 00:23:15.971 } 00:23:15.971 }, 00:23:15.971 { 00:23:15.971 "method": "bdev_nvme_set_hotplug", 00:23:15.971 "params": { 00:23:15.971 "period_us": 100000, 00:23:15.971 "enable": false 00:23:15.971 } 00:23:15.971 }, 00:23:15.971 { 00:23:15.971 "method": "bdev_enable_histogram", 00:23:15.971 "params": { 00:23:15.971 "name": "nvme0n1", 00:23:15.971 "enable": true 00:23:15.971 } 00:23:15.972 }, 00:23:15.972 { 00:23:15.972 "method": "bdev_wait_for_examine" 00:23:15.972 } 00:23:15.972 ] 00:23:15.972 }, 00:23:15.972 { 00:23:15.972 "subsystem": "nbd", 00:23:15.972 "config": [] 00:23:15.972 } 00:23:15.972 ] 00:23:15.972 }' 00:23:16.230 [2024-05-15 00:38:42.212277] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
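The bdevperf instance started just above (pid 2068754) follows the same replay pattern on the initiator side: the per-bdevperf config saved earlier ($bperfcfg, including the keyring key and the bdev_nvme_attach_controller entry with "psk": "key0") is fed in via '-c /dev/fd/63' instead of being re-issued as individual RPCs. A sketch, again assuming bash process substitution:

    SPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk
    bperfcfg=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bdevperf.sock save_config)
    "$SPDK_DIR/build/examples/bdevperf" -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bperfcfg") &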
00:23:16.230 [2024-05-15 00:38:42.212417] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2068754 ] 00:23:16.230 EAL: No free 2048 kB hugepages reported on node 1 00:23:16.230 [2024-05-15 00:38:42.343632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.487 [2024-05-15 00:38:42.439726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.745 [2024-05-15 00:38:42.655392] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:17.003 00:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:17.003 00:38:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@861 -- # return 0 00:23:17.003 00:38:42 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:17.003 00:38:42 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:23:17.003 00:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.003 00:38:43 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:17.003 Running I/O for 1 seconds... 00:23:18.379 00:23:18.379 Latency(us) 00:23:18.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.379 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:18.379 Verification LBA range: start 0x0 length 0x2000 00:23:18.379 nvme0n1 : 1.01 5545.34 21.66 0.00 0.00 22940.89 4346.07 30215.55 00:23:18.379 =================================================================================================================== 00:23:18.379 Total : 5545.34 21.66 0.00 0.00 22940.89 4346.07 30215.55 00:23:18.379 0 00:23:18.379 00:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:18.379 00:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:18.379 00:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:18.379 00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@805 -- # type=--id 00:23:18.379 00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # id=0 00:23:18.379 00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:23:18.379 00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:18.379 00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:23:18.379 00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:23:18.379 00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@817 -- # for n in $shm_files 00:23:18.379 00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:18.379 nvmf_trace.0 00:23:18.379 00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@820 -- # return 0 00:23:18.379 00:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2068754 00:23:18.379 00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2068754 ']' 00:23:18.379 00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2068754 00:23:18.379 
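Two steps buried in the block above are worth pulling out: the jq check that a controller named nvme0 really attached over TLS, and the archiving of the shared-memory trace file before teardown. As standalone commands, with paths as they appear in the log:

    SPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk
    name=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers \
        | jq -r '.[].name')
    [ "$name" = nvme0 ] || exit 1          # the TLS attach must have produced controller nvme0
    # preserve the nvmf trace from /dev/shm for offline analysis
    tar -C /dev/shm/ -cvzf "$SPDK_DIR/../output/nvmf_trace.0_shm.tar.gz" nvmf_trace.0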
00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:23:18.379 00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:18.379 00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2068754 00:23:18.379 00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:23:18.379 00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:23:18.379 00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2068754' 00:23:18.379 killing process with pid 2068754 00:23:18.379 00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2068754 00:23:18.379 Received shutdown signal, test time was about 1.000000 seconds 00:23:18.379 00:23:18.379 Latency(us) 00:23:18.379 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.379 =================================================================================================================== 00:23:18.379 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:18.379 00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 2068754 00:23:18.636 00:38:44 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:18.636 00:38:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:18.636 00:38:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:18.636 00:38:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:18.636 00:38:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:18.636 00:38:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:18.636 00:38:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:18.636 rmmod nvme_tcp 00:23:18.636 rmmod nvme_fabrics 00:23:18.636 rmmod nvme_keyring 00:23:18.636 00:38:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:18.636 00:38:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:18.637 00:38:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:18.637 00:38:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2068457 ']' 00:23:18.637 00:38:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2068457 00:23:18.637 00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@947 -- # '[' -z 2068457 ']' 00:23:18.637 00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@951 -- # kill -0 2068457 00:23:18.637 00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # uname 00:23:18.637 00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:18.637 00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2068457 00:23:18.637 00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:18.637 00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:18.637 00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2068457' 00:23:18.637 killing process with pid 2068457 00:23:18.637 00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # kill 2068457 00:23:18.637 [2024-05-15 00:38:44.757464] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:18.637 00:38:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@971 -- # wait 
2068457 00:23:19.202 00:38:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:19.202 00:38:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:19.202 00:38:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:19.202 00:38:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:19.202 00:38:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:19.202 00:38:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.203 00:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:19.203 00:38:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.736 00:38:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:21.736 00:38:47 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.gLNcoGeD68 /tmp/tmp.6ygtWbFycK /tmp/tmp.yZEmyQR5mU 00:23:21.736 00:23:21.736 real 1m26.695s 00:23:21.736 user 2m16.871s 00:23:21.736 sys 0m22.779s 00:23:21.736 00:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:21.736 00:38:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.736 ************************************ 00:23:21.736 END TEST nvmf_tls 00:23:21.736 ************************************ 00:23:21.736 00:38:47 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:21.736 00:38:47 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:21.736 00:38:47 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:21.736 00:38:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:21.736 ************************************ 00:23:21.736 START TEST nvmf_fips 00:23:21.736 ************************************ 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:21.736 * Looking for test storage... 
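Each sub-suite in this job is launched through the same run_test wrapper with the transport passed as an argument, so the FIPS test starting here can also be exercised on its own. A sketch, assuming an SPDK checkout at this workspace path and the usual test prerequisites (root, hugepages, target NICs configured):

  cd /var/jenkins/workspace/dsa-phy-autotest/spdk
  # re-run just the FIPS/TLS provider test over TCP, as the harness does above
  ./test/nvmf/fips/fips.sh --transport=tcp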
00:23:21.736 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.736 00:38:47 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 
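The xtrace that follows walks scripts/common.sh's cmp_versions through "3.0.9 >= 3.0.0" one component at a time: both strings are split on dots and dashes, each position is compared numerically, and the first unequal component decides the result. A compact sketch of the same idea (ver_ge is an illustrative name, not a helper from the repo, and it assumes purely numeric components):

  # returns success when dotted version $1 >= $2 (numeric components only)
  ver_ge() {
      local -a a b
      IFS=.- read -ra a <<< "$1"
      IFS=.- read -ra b <<< "$2"
      local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < max; i++ )); do
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 0   # first larger component wins
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 1   # first smaller component loses
      done
      return 0                                        # all components equal
  }
  ver_ge "$(openssl version | awk '{print $2}')" 3.0.0 && echo "OpenSSL >= 3.0.0"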
00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:21.736 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@649 -- # local es=0 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@637 -- # local arg=openssl 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # type -t openssl 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # type -P openssl 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # arg=/usr/bin/openssl 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@643 -- # [[ -x /usr/bin/openssl ]] 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # openssl md5 /dev/fd/62 00:23:21.737 Error setting digest 00:23:21.737 00C2627D347F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:21.737 00C2627D347F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@652 -- # es=1 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:21.737 00:38:47 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:27.005 
00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:23:27.005 Found 0000:27:00.0 (0x8086 - 0x159b) 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:23:27.005 Found 0000:27:00.1 (0x8086 - 0x159b) 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:23:27.005 Found net devices under 0000:27:00.0: cvl_0_0 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:23:27.005 Found net devices under 0000:27:00.1: cvl_0_1 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:27.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:27.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:23:27.005 00:23:27.005 --- 10.0.0.2 ping statistics --- 00:23:27.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.005 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:27.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:27.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.290 ms 00:23:27.005 00:23:27.005 --- 10.0.0.1 ping statistics --- 00:23:27.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.005 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:23:27.005 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:27.006 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:27.006 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:27.006 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:27.006 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:27.006 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:27.006 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:27.006 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:27.006 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:27.006 00:38:52 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:27.006 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:27.006 00:38:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:27.006 00:38:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:27.006 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2073278 00:23:27.006 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2073278 00:23:27.006 00:38:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@828 -- # '[' -z 2073278 ']' 00:23:27.006 00:38:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.006 00:38:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:27.006 00:38:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.006 00:38:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:27.006 00:38:52 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:27.006 00:38:52 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:27.006 [2024-05-15 00:38:53.065572] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:23:27.006 [2024-05-15 00:38:53.065702] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.006 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.265 [2024-05-15 00:38:53.222807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.265 [2024-05-15 00:38:53.384714] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.265 [2024-05-15 00:38:53.384797] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:27.265 [2024-05-15 00:38:53.384819] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.265 [2024-05-15 00:38:53.384836] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.265 [2024-05-15 00:38:53.384850] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:27.265 [2024-05-15 00:38:53.384908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.830 00:38:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:27.830 00:38:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@861 -- # return 0 00:23:27.830 00:38:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:27.830 00:38:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:27.830 00:38:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:27.830 00:38:53 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.830 00:38:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:27.830 00:38:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:27.830 00:38:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:27.830 00:38:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:27.830 00:38:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:27.830 00:38:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:27.830 00:38:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:27.830 00:38:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:23:27.830 [2024-05-15 00:38:53.878479] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:27.830 [2024-05-15 00:38:53.894395] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:27.830 [2024-05-15 00:38:53.894504] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:27.830 [2024-05-15 00:38:53.894783] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:27.830 [2024-05-15 00:38:53.955079] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:27.830 malloc0 00:23:27.830 00:38:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:27.830 00:38:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2073457 00:23:27.830 00:38:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2073457 /var/tmp/bdevperf.sock 00:23:27.830 00:38:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@828 -- # '[' -z 2073457 ']' 00:23:27.830 00:38:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:27.830 00:38:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # 
local max_retries=100 00:23:27.830 00:38:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:27.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:27.830 00:38:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:27.830 00:38:53 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:27.830 00:38:53 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:28.088 [2024-05-15 00:38:54.076163] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:23:28.088 [2024-05-15 00:38:54.076282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2073457 ] 00:23:28.088 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.088 [2024-05-15 00:38:54.192584] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:28.346 [2024-05-15 00:38:54.289133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.606 00:38:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:28.606 00:38:54 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@861 -- # return 0 00:23:28.606 00:38:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:28.865 [2024-05-15 00:38:54.883857] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:28.865 [2024-05-15 00:38:54.884000] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:28.865 TLSTESTn1 00:23:28.865 00:38:54 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:29.125 Running I/O for 10 seconds... 
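The 10-second run announced above comes from the bdevperf invocation traced earlier in this test: one reactor on core 2 (-m 0x4) driving a 128-deep, 4 KiB verify workload against TLSTESTn1, started via perform_tests once the TLS controller is attached. A sketch of the two halves, with the paths and flags as used in this job:

  spdk=/var/jenkins/workspace/dsa-phy-autotest/spdk
  # start bdevperf idle (-z) on its own RPC socket: queue depth 128, 4 KiB verify, 10 s
  $spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # ...attach the TLS controller as shown above, then release the queued workload
  $spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

As a rough sanity check on the numbers reported below, about 5400 IOPS at 4096 bytes is 22.1 MB/s, i.e. the ~21.1 MiB/s shown in the results table.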
00:23:39.090 00:23:39.090 Latency(us) 00:23:39.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.090 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:39.090 Verification LBA range: start 0x0 length 0x2000 00:23:39.090 TLSTESTn1 : 10.02 5400.25 21.09 0.00 0.00 23664.53 6726.06 33388.87 00:23:39.090 =================================================================================================================== 00:23:39.090 Total : 5400.25 21.09 0.00 0.00 23664.53 6726.06 33388.87 00:23:39.090 0 00:23:39.090 00:39:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:39.090 00:39:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:39.090 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@805 -- # type=--id 00:23:39.090 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # id=0 00:23:39.090 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # '[' --id = --pid ']' 00:23:39.090 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@811 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:39.090 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@811 -- # shm_files=nvmf_trace.0 00:23:39.090 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@813 -- # [[ -z nvmf_trace.0 ]] 00:23:39.090 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@817 -- # for n in $shm_files 00:23:39.090 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:39.090 nvmf_trace.0 00:23:39.090 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@820 -- # return 0 00:23:39.090 00:39:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2073457 00:23:39.090 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' -z 2073457 ']' 00:23:39.090 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # kill -0 2073457 00:23:39.090 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # uname 00:23:39.090 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:39.090 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2073457 00:23:39.090 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:23:39.090 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:23:39.090 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2073457' 00:23:39.090 killing process with pid 2073457 00:23:39.090 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # kill 2073457 00:23:39.090 Received shutdown signal, test time was about 10.000000 seconds 00:23:39.090 00:23:39.090 Latency(us) 00:23:39.090 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.090 =================================================================================================================== 00:23:39.090 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:39.090 [2024-05-15 00:39:05.228488] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:39.090 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@971 -- # wait 2073457 00:23:39.655 00:39:05 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:39.655 00:39:05 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:23:39.655 00:39:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:39.655 00:39:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:39.655 00:39:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:39.655 00:39:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:39.655 00:39:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:39.655 rmmod nvme_tcp 00:23:39.655 rmmod nvme_fabrics 00:23:39.655 rmmod nvme_keyring 00:23:39.655 00:39:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:39.655 00:39:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:39.655 00:39:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:39.655 00:39:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2073278 ']' 00:23:39.655 00:39:05 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2073278 00:23:39.655 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@947 -- # '[' -z 2073278 ']' 00:23:39.655 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@951 -- # kill -0 2073278 00:23:39.655 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # uname 00:23:39.655 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:39.655 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2073278 00:23:39.655 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:23:39.655 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:23:39.655 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2073278' 00:23:39.655 killing process with pid 2073278 00:23:39.655 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # kill 2073278 00:23:39.655 [2024-05-15 00:39:05.724741] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:39.655 [2024-05-15 00:39:05.724794] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:39.655 00:39:05 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@971 -- # wait 2073278 00:23:40.221 00:39:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:40.221 00:39:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:40.221 00:39:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:40.221 00:39:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:40.221 00:39:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:40.221 00:39:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.221 00:39:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:40.221 00:39:06 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.752 00:39:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:42.752 00:39:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:42.752 00:23:42.752 real 0m20.934s 00:23:42.752 user 0m24.513s 00:23:42.752 sys 0m7.066s 00:23:42.752 00:39:08 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@1123 -- # xtrace_disable 00:23:42.752 00:39:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:42.752 ************************************ 00:23:42.752 END TEST nvmf_fips 00:23:42.752 ************************************ 00:23:42.752 00:39:08 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:23:42.752 00:39:08 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy-fallback == phy ]] 00:23:42.752 00:39:08 nvmf_tcp -- nvmf/nvmf.sh@85 -- # timing_exit target 00:23:42.752 00:39:08 nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:42.752 00:39:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:42.752 00:39:08 nvmf_tcp -- nvmf/nvmf.sh@87 -- # timing_enter host 00:23:42.752 00:39:08 nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:42.752 00:39:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:42.752 00:39:08 nvmf_tcp -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:23:42.752 00:39:08 nvmf_tcp -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:42.752 00:39:08 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:42.752 00:39:08 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:42.752 00:39:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:42.752 ************************************ 00:23:42.752 START TEST nvmf_multicontroller 00:23:42.752 ************************************ 00:23:42.752 00:39:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:42.752 * Looking for test storage... 00:23:42.752 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:23:42.752 00:39:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:23:42.752 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:42.752 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:42.752 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:42.752 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:42.752 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:42.752 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:42.752 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:42.752 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:42.752 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:42.752 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:42.752 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:42.752 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:23:42.752 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:23:42.752 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:42.752 00:39:08 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:42.752 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:42.752 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:42.752 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:23:42.752 00:39:08 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:42.752 00:39:08 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:42.752 00:39:08 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:42.752 00:39:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.752 00:39:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.752 00:39:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:42.753 00:39:08 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:42.753 00:39:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:48.016 00:39:13 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:23:48.016 Found 0000:27:00.0 (0x8086 - 0x159b) 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 
(0x8086 - 0x159b)' 00:23:48.016 Found 0000:27:00.1 (0x8086 - 0x159b) 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:23:48.016 Found net devices under 0000:27:00.0: cvl_0_0 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:23:48.016 Found net devices under 0000:27:00.1: cvl_0_1 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:48.016 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:48.017 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:48.017 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:23:48.017 00:23:48.017 --- 10.0.0.2 ping statistics --- 00:23:48.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.017 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:48.017 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:48.017 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:23:48.017 00:23:48.017 --- 10.0.0.1 ping statistics --- 00:23:48.017 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.017 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@721 -- # xtrace_disable 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2080140 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2080140 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@828 -- # '[' -z 2080140 ']' 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.017 00:39:13 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:48.017 [2024-05-15 00:39:13.953911] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:23:48.017 [2024-05-15 00:39:13.954023] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:48.017 EAL: No free 2048 kB hugepages reported on node 1 00:23:48.017 [2024-05-15 00:39:14.105445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:48.275 [2024-05-15 00:39:14.267664] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:23:48.275 [2024-05-15 00:39:14.267720] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:48.275 [2024-05-15 00:39:14.267737] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:48.275 [2024-05-15 00:39:14.267753] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:48.275 [2024-05-15 00:39:14.267766] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:48.275 [2024-05-15 00:39:14.267946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:48.275 [2024-05-15 00:39:14.268059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.275 [2024-05-15 00:39:14.268070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:48.532 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:48.532 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@861 -- # return 0 00:23:48.532 00:39:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:48.532 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@727 -- # xtrace_disable 00:23:48.532 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.791 [2024-05-15 00:39:14.714421] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.791 Malloc0 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 
-- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.791 [2024-05-15 00:39:14.816511] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:23:48.791 [2024-05-15 00:39:14.816868] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.791 [2024-05-15 00:39:14.824676] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.791 Malloc1 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:48.791 00:39:14 
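For reference, the target-side setup traced above can be reproduced by hand against an already-running nvmf_tgt. This is a minimal sketch using the same RPC method names seen in the trace; rpc_cmd is the harness's wrapper around SPDK's scripts/rpc.py client, so substituting scripts/rpc.py directly is assumed to work outside the harness:

    # TCP transport, with the extra '-o -u 8192' options exactly as passed in the trace
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    # one 64 MiB / 512-byte-block malloc bdev per subsystem
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # the same steps are repeated for Malloc1 / nqn.2016-06.io.spdk:cnode2
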
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2080454 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2080454 /var/tmp/bdevperf.sock 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@828 -- # '[' -z 2080454 ']' 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local max_retries=100 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:48.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@837 -- # xtrace_disable 00:23:48.791 00:39:14 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:49.722 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:23:49.722 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@861 -- # return 0 00:23:49.722 00:39:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:49.722 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:49.722 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:49.982 NVMe0n1 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:49.982 1 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:49.982 request: 00:23:49.982 { 00:23:49.982 "name": "NVMe0", 00:23:49.982 "trtype": "tcp", 00:23:49.982 "traddr": "10.0.0.2", 00:23:49.982 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:49.982 "hostaddr": "10.0.0.2", 00:23:49.982 "hostsvcid": "60000", 00:23:49.982 "adrfam": "ipv4", 00:23:49.982 "trsvcid": "4420", 00:23:49.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.982 "method": "bdev_nvme_attach_controller", 00:23:49.982 "req_id": 1 00:23:49.982 } 00:23:49.982 Got JSON-RPC error response 00:23:49.982 response: 00:23:49.982 { 00:23:49.982 "code": -114, 00:23:49.982 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:49.982 } 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:49.982 request: 00:23:49.982 { 00:23:49.982 "name": "NVMe0", 00:23:49.982 "trtype": "tcp", 00:23:49.982 "traddr": "10.0.0.2", 00:23:49.982 "hostaddr": "10.0.0.2", 00:23:49.982 "hostsvcid": "60000", 00:23:49.982 "adrfam": "ipv4", 00:23:49.982 "trsvcid": "4420", 00:23:49.982 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:49.982 "method": "bdev_nvme_attach_controller", 00:23:49.982 "req_id": 1 00:23:49.982 } 00:23:49.982 Got JSON-RPC error response 00:23:49.982 response: 00:23:49.982 { 00:23:49.982 "code": -114, 00:23:49.982 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:49.982 } 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:49.982 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:49.982 request: 00:23:49.982 { 00:23:49.982 "name": "NVMe0", 00:23:49.982 "trtype": "tcp", 00:23:49.982 "traddr": "10.0.0.2", 00:23:49.982 "hostaddr": "10.0.0.2", 00:23:49.982 "hostsvcid": "60000", 00:23:49.982 "adrfam": "ipv4", 00:23:49.982 "trsvcid": "4420", 00:23:49.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.982 "multipath": "disable", 00:23:49.982 "method": "bdev_nvme_attach_controller", 00:23:49.982 "req_id": 1 00:23:49.982 } 00:23:49.982 Got JSON-RPC error response 00:23:49.982 response: 00:23:49.982 { 00:23:49.982 "code": -114, 00:23:49.982 "message": "A controller named NVMe0 already 
exists and multipath is disabled\n" 00:23:49.982 } 00:23:49.983 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:49.983 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:49.983 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:49.983 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:49.983 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:49.983 00:39:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:49.983 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@649 -- # local es=0 00:23:49.983 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:49.983 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:23:49.983 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:49.983 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:23:49.983 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:23:49.983 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:49.983 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:49.983 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:49.983 request: 00:23:49.983 { 00:23:49.983 "name": "NVMe0", 00:23:49.983 "trtype": "tcp", 00:23:49.983 "traddr": "10.0.0.2", 00:23:49.983 "hostaddr": "10.0.0.2", 00:23:49.983 "hostsvcid": "60000", 00:23:49.983 "adrfam": "ipv4", 00:23:49.983 "trsvcid": "4420", 00:23:49.983 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:49.983 "multipath": "failover", 00:23:49.983 "method": "bdev_nvme_attach_controller", 00:23:49.983 "req_id": 1 00:23:49.983 } 00:23:49.983 Got JSON-RPC error response 00:23:49.983 response: 00:23:49.983 { 00:23:49.983 "code": -114, 00:23:49.983 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:49.983 } 00:23:49.983 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:23:49.983 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@652 -- # es=1 00:23:49.983 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:23:49.983 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:23:49.983 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:23:49.983 00:39:15 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:49.983 00:39:15 
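The four rejected attach attempts above all go through the harness's NOT wrapper, which succeeds only when the wrapped command fails, so the test passes only when bdev_nvme_attach_controller is refused (each response above carries JSON-RPC error code -114) because a controller named NVMe0 already exists. Outside the harness the same negative check could be written roughly as follows (scripts/rpc.py assumed as the client):

    # a second controller with the same -b name but a different subsystem must be rejected
    if ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
          -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
          -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000; then
        echo "duplicate controller name unexpectedly accepted" >&2
        exit 1
    fi
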
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:49.983 00:39:15 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.242 00:23:50.242 00:39:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.242 00:39:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:50.242 00:39:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.242 00:39:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.242 00:39:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.242 00:39:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:50.242 00:39:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.242 00:39:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.501 00:23:50.501 00:39:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.501 00:39:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:50.501 00:39:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:50.501 00:39:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:50.501 00:39:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:50.501 00:39:16 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:50.501 00:39:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:50.501 00:39:16 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:51.432 0 00:23:51.432 00:39:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:51.432 00:39:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:51.432 00:39:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:51.432 00:39:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:51.432 00:39:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2080454 00:23:51.432 00:39:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # '[' -z 2080454 ']' 00:23:51.432 00:39:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # kill -0 2080454 00:23:51.432 00:39:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # uname 00:23:51.432 00:39:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:51.432 00:39:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2080454 00:23:51.432 00:39:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:23:51.432 00:39:17 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:23:51.432 00:39:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2080454' 00:23:51.432 killing process with pid 2080454 00:23:51.432 00:39:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # kill 2080454 00:23:51.432 00:39:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@971 -- # wait 2080454 00:23:52.001 00:39:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:52.001 00:39:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.001 00:39:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.001 00:39:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.001 00:39:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:52.001 00:39:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@560 -- # xtrace_disable 00:23:52.001 00:39:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:52.001 00:39:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:23:52.001 00:39:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:52.001 00:39:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:52.001 00:39:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # read -r file 00:23:52.001 00:39:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # find /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:52.001 00:39:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1608 -- # sort -u 00:23:52.001 00:39:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1610 -- # cat 00:23:52.001 --- /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:52.001 [2024-05-15 00:39:14.978419] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:23:52.001 [2024-05-15 00:39:14.978537] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2080454 ] 00:23:52.002 EAL: No free 2048 kB hugepages reported on node 1 00:23:52.002 [2024-05-15 00:39:15.092797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.002 [2024-05-15 00:39:15.189225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.002 [2024-05-15 00:39:16.412264] bdev.c:4575:bdev_name_add: *ERROR*: Bdev name a5ab2475-5631-445f-8425-195ed85e7564 already exists 00:23:52.002 [2024-05-15 00:39:16.412308] bdev.c:7691:bdev_register: *ERROR*: Unable to add uuid:a5ab2475-5631-445f-8425-195ed85e7564 alias for bdev NVMe1n1 00:23:52.002 [2024-05-15 00:39:16.412326] bdev_nvme.c:4297:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:52.002 Running I/O for 1 seconds... 
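The try.txt dump being printed here is bdevperf's own log: the application was started earlier with '-z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f', had NVMe0 and NVMe1 attached over the two listeners, and perform_tests then drove the one-second, 128-deep, 4 KiB write workload whose latency summary follows. A rough client-side sketch of that sequence, with paths relative to an SPDK checkout and scripts/rpc.py assumed as the RPC client:

    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &
    bdevperf_pid=$!
    # attach the same subsystem through both listeners, under two controller names
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
    # kick off the configured workload and wait for the result
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
    kill $bdevperf_pid
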
00:23:52.002 00:23:52.002 Latency(us) 00:23:52.002 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.002 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:23:52.002 NVMe0n1 : 1.00 25374.07 99.12 0.00 0.00 5033.35 1810.86 10347.79 00:23:52.002 =================================================================================================================== 00:23:52.002 Total : 25374.07 99.12 0.00 0.00 5033.35 1810.86 10347.79 00:23:52.002 Received shutdown signal, test time was about 1.000000 seconds 00:23:52.002 00:23:52.002 Latency(us) 00:23:52.002 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:52.002 =================================================================================================================== 00:23:52.002 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:52.002 --- /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:52.002 00:39:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1615 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:52.002 00:39:17 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1609 -- # read -r file 00:23:52.002 00:39:17 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:52.002 00:39:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:52.002 00:39:17 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:52.002 00:39:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:52.002 00:39:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:52.002 00:39:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:52.002 00:39:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:52.002 rmmod nvme_tcp 00:23:52.002 rmmod nvme_fabrics 00:23:52.002 rmmod nvme_keyring 00:23:52.002 00:39:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:52.002 00:39:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:52.002 00:39:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:52.002 00:39:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2080140 ']' 00:23:52.002 00:39:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2080140 00:23:52.002 00:39:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@947 -- # '[' -z 2080140 ']' 00:23:52.002 00:39:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@951 -- # kill -0 2080140 00:23:52.002 00:39:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # uname 00:23:52.002 00:39:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:23:52.002 00:39:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2080140 00:23:52.002 00:39:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:23:52.002 00:39:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:23:52.002 00:39:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2080140' 00:23:52.002 killing process with pid 2080140 00:23:52.002 00:39:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # kill 2080140 00:23:52.002 [2024-05-15 00:39:18.126923] 
app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:23:52.002 00:39:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@971 -- # wait 2080140 00:23:52.570 00:39:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:52.570 00:39:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:52.570 00:39:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:52.570 00:39:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:52.570 00:39:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:52.570 00:39:18 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.570 00:39:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:52.570 00:39:18 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.119 00:39:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:55.119 00:23:55.119 real 0m12.355s 00:23:55.119 user 0m17.723s 00:23:55.119 sys 0m4.876s 00:23:55.119 00:39:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # xtrace_disable 00:23:55.119 00:39:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:55.119 ************************************ 00:23:55.119 END TEST nvmf_multicontroller 00:23:55.119 ************************************ 00:23:55.119 00:39:20 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:55.119 00:39:20 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:23:55.119 00:39:20 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:23:55.119 00:39:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:55.119 ************************************ 00:23:55.119 START TEST nvmf_aer 00:23:55.119 ************************************ 00:23:55.119 00:39:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:55.120 * Looking for test storage... 
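Each of these host-side suites is a self-contained script that takes the transport on its command line, so the aer run starting here can be launched outside the CI wrapper in roughly the same way (the checkout location and root privileges are assumptions, and the environment configuration the harness normally exports is omitted):

    cd /path/to/spdk            # hypothetical checkout location
    sudo ./test/nvmf/host/aer.sh --transport=tcp
    # the multicontroller suite that just finished is invoked the same way:
    # sudo ./test/nvmf/host/multicontroller.sh --transport=tcp
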
00:23:55.120 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:55.120 00:39:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 
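The nvmftestinit sequence that follows rebuilds the same physical-NIC topology used by the previous suite: the two ice-driven ports are detected, cvl_0_0 is moved into a private network namespace to act as the target side at 10.0.0.2 (the nvmf_tgt application is later launched inside that namespace), and cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed from the trace, the setup amounts to:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator
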
00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:24:00.388 Found 0000:27:00.0 (0x8086 - 0x159b) 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:24:00.388 Found 
0000:27:00.1 (0x8086 - 0x159b) 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.388 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:24:00.389 Found net devices under 0000:27:00.0: cvl_0_0 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:24:00.389 Found net devices under 0000:27:00.1: cvl_0_1 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:00.389 00:39:26 
nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:00.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:00.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:24:00.389 00:24:00.389 --- 10.0.0.2 ping statistics --- 00:24:00.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.389 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:00.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:00.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:24:00.389 00:24:00.389 --- 10.0.0.1 ping statistics --- 00:24:00.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:00.389 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@721 -- # xtrace_disable 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2084948 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2084948 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@828 -- # '[' -z 2084948 ']' 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:00.389 00:39:26 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:00.389 [2024-05-15 00:39:26.321598] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:24:00.389 [2024-05-15 00:39:26.321700] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:00.389 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.389 [2024-05-15 00:39:26.450214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:00.647 [2024-05-15 00:39:26.552326] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:00.647 [2024-05-15 00:39:26.552363] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
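The nvmftestinit/nvmf_tcp_init block above reduces to a short piece of ip(8) plumbing: the first ice port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2 for the target, the second port (cvl_0_1) stays in the default namespace as the 10.0.0.1 initiator side, TCP port 4420 is opened, and both directions are ping-checked before the target starts. A condensed sketch of that sequence follows; the interface names, namespace name, and addresses are the values this particular rig logged, not fixed defaults.

#!/usr/bin/env bash
# Condensed sketch of the nvmf_tcp_init steps logged above.
# Interface/namespace names and IPs are this rig's values, not defaults.
set -e
TGT_IF=cvl_0_0          # port handed to the SPDK target
INI_IF=cvl_0_1          # port kept on the initiator side
NS=cvl_0_0_ns_spdk      # namespace the target runs in

ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
# Accept NVMe/TCP traffic (port 4420) arriving on the initiator-side port.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
# Sanity-check both directions before any NVMe-oF traffic is attempted.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1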
00:24:00.647 [2024-05-15 00:39:26.552373] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:00.647 [2024-05-15 00:39:26.552383] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:00.647 [2024-05-15 00:39:26.552391] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:00.647 [2024-05-15 00:39:26.552502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:00.647 [2024-05-15 00:39:26.552592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:00.647 [2024-05-15 00:39:26.552674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.647 [2024-05-15 00:39:26.552683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:00.904 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:00.904 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@861 -- # return 0 00:24:00.904 00:39:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:00.904 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:00.904 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.162 [2024-05-15 00:39:27.084437] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.162 Malloc0 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.162 [2024-05-15 00:39:27.150331] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: 
decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:01.162 [2024-05-15 00:39:27.150639] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.162 [ 00:24:01.162 { 00:24:01.162 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:01.162 "subtype": "Discovery", 00:24:01.162 "listen_addresses": [], 00:24:01.162 "allow_any_host": true, 00:24:01.162 "hosts": [] 00:24:01.162 }, 00:24:01.162 { 00:24:01.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.162 "subtype": "NVMe", 00:24:01.162 "listen_addresses": [ 00:24:01.162 { 00:24:01.162 "trtype": "TCP", 00:24:01.162 "adrfam": "IPv4", 00:24:01.162 "traddr": "10.0.0.2", 00:24:01.162 "trsvcid": "4420" 00:24:01.162 } 00:24:01.162 ], 00:24:01.162 "allow_any_host": true, 00:24:01.162 "hosts": [], 00:24:01.162 "serial_number": "SPDK00000000000001", 00:24:01.162 "model_number": "SPDK bdev Controller", 00:24:01.162 "max_namespaces": 2, 00:24:01.162 "min_cntlid": 1, 00:24:01.162 "max_cntlid": 65519, 00:24:01.162 "namespaces": [ 00:24:01.162 { 00:24:01.162 "nsid": 1, 00:24:01.162 "bdev_name": "Malloc0", 00:24:01.162 "name": "Malloc0", 00:24:01.162 "nguid": "2F08F75D325645168DDE94F3B688A072", 00:24:01.162 "uuid": "2f08f75d-3256-4516-8dde-94f3b688a072" 00:24:01.162 } 00:24:01.162 ] 00:24:01.162 } 00:24:01.162 ] 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2085263 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1262 -- # local i=0 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' 0 -lt 200 ']' 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # i=1 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # sleep 0.1 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' 1 -lt 200 ']' 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # i=2 00:24:01.162 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # sleep 0.1 00:24:01.162 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.420 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:01.420 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1264 -- # '[' 2 -lt 200 ']' 00:24:01.420 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # i=3 00:24:01.420 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # sleep 0.1 00:24:01.420 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1263 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:01.420 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:01.420 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1273 -- # return 0 00:24:01.420 00:39:27 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:01.420 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.420 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.420 Malloc1 00:24:01.420 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.420 00:39:27 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:01.420 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.420 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.420 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.420 00:39:27 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:01.420 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.420 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.420 [ 00:24:01.420 { 00:24:01.420 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:01.420 "subtype": "Discovery", 00:24:01.420 "listen_addresses": [], 00:24:01.420 "allow_any_host": true, 00:24:01.420 "hosts": [] 00:24:01.420 }, 00:24:01.420 { 00:24:01.420 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:01.420 "subtype": "NVMe", 00:24:01.420 "listen_addresses": [ 00:24:01.420 { 00:24:01.420 "trtype": "TCP", 00:24:01.420 "adrfam": "IPv4", 00:24:01.420 "traddr": "10.0.0.2", 00:24:01.420 "trsvcid": "4420" 00:24:01.420 } 00:24:01.420 ], 00:24:01.420 "allow_any_host": true, 00:24:01.421 "hosts": [], 00:24:01.421 "serial_number": "SPDK00000000000001", 00:24:01.421 "model_number": "SPDK bdev Controller", 00:24:01.421 "max_namespaces": 2, 00:24:01.421 "min_cntlid": 1, 00:24:01.421 "max_cntlid": 65519, 00:24:01.421 "namespaces": [ 00:24:01.421 { 00:24:01.421 "nsid": 1, 00:24:01.421 "bdev_name": "Malloc0", 00:24:01.421 "name": "Malloc0", 00:24:01.421 "nguid": "2F08F75D325645168DDE94F3B688A072", 00:24:01.421 "uuid": "2f08f75d-3256-4516-8dde-94f3b688a072" 00:24:01.421 }, 00:24:01.421 { 00:24:01.421 "nsid": 2, 00:24:01.421 "bdev_name": "Malloc1", 00:24:01.421 "name": "Malloc1", 00:24:01.421 "nguid": "D8CBABFC1F6E44BAA57E30480C25657F", 00:24:01.421 "uuid": "d8cbabfc-1f6e-44ba-a57e-30480c25657f" 00:24:01.421 } 00:24:01.421 ] 00:24:01.421 } 00:24:01.421 ] 00:24:01.421 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.421 00:39:27 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2085263 00:24:01.680 Asynchronous Event Request test 00:24:01.680 Attaching to 10.0.0.2 00:24:01.680 Attached to 10.0.0.2 00:24:01.680 Registering asynchronous event callbacks... 00:24:01.680 Starting namespace attribute notice tests for all controllers... 
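Everything host/aer.sh does against the target above goes through the JSON-RPC socket; rpc_cmd in this harness is, for practical purposes, a wrapper around scripts/rpc.py. A hedged replay of the same sequence is sketched below, reusing the arguments exactly as logged; the SPDK checkout path and touch-file location are specific to this workspace, and the polling on the touch file stands in for the harness's waitforfile helper.

#!/usr/bin/env bash
# Sketch of the nvmf_aer flow above, replayed via scripts/rpc.py.
# SPDK_DIR and the touch-file path come from this job's log.
set -e
SPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk
rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 --name Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Start the AER consumer; it touches the file once its callbacks are registered.
TOUCH=/tmp/aer_touch_file
rm -f "$TOUCH"
"$SPDK_DIR/test/nvme/aer/aer" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t "$TOUCH" &
aerpid=$!
until [ -e "$TOUCH" ]; do sleep 0.1; done

# Adding a second namespace fires the Changed Namespace AEN the test waits for.
rpc bdev_malloc_create 64 4096 --name Malloc1
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
wait "$aerpid"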
00:24:01.680 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:01.680 aer_cb - Changed Namespace 00:24:01.680 Cleaning up... 00:24:01.680 00:39:27 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:01.680 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.680 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.680 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.680 00:39:27 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:01.680 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.680 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.680 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.680 00:39:27 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:01.681 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:01.681 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:01.681 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:01.681 00:39:27 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:01.681 00:39:27 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:01.681 00:39:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:01.681 00:39:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:24:01.681 00:39:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:01.681 00:39:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:24:01.681 00:39:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:01.681 00:39:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:01.681 rmmod nvme_tcp 00:24:01.681 rmmod nvme_fabrics 00:24:01.681 rmmod nvme_keyring 00:24:01.681 00:39:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:01.681 00:39:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:24:01.681 00:39:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:24:01.681 00:39:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2084948 ']' 00:24:01.681 00:39:27 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2084948 00:24:01.681 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@947 -- # '[' -z 2084948 ']' 00:24:01.681 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@951 -- # kill -0 2084948 00:24:01.681 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # uname 00:24:01.681 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:01.681 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2084948 00:24:01.939 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:24:01.939 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:24:01.939 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2084948' 00:24:01.939 killing process with pid 2084948 00:24:01.939 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # kill 2084948 00:24:01.939 [2024-05-15 00:39:27.858671] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in 
favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:01.939 00:39:27 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@971 -- # wait 2084948 00:24:02.506 00:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:02.506 00:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:02.506 00:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:02.506 00:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:02.506 00:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:02.506 00:39:28 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.506 00:39:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.506 00:39:28 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.409 00:39:30 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:04.409 00:24:04.409 real 0m9.620s 00:24:04.409 user 0m8.232s 00:24:04.409 sys 0m4.444s 00:24:04.409 00:39:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # xtrace_disable 00:24:04.409 00:39:30 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:04.409 ************************************ 00:24:04.409 END TEST nvmf_aer 00:24:04.409 ************************************ 00:24:04.409 00:39:30 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:04.409 00:39:30 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:24:04.409 00:39:30 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:24:04.409 00:39:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:04.409 ************************************ 00:24:04.409 START TEST nvmf_async_init 00:24:04.409 ************************************ 00:24:04.409 00:39:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:04.667 * Looking for test storage... 
00:24:04.667 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:24:04.667 00:39:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:24:04.667 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:04.667 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.667 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.667 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.667 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=d11ade245926466aa73ee02a291228f5 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:04.668 00:39:30 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:24:04.668 00:39:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:24:09.935 Found 0000:27:00.0 (0x8086 - 0x159b) 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:24:09.935 Found 0000:27:00.1 (0x8086 - 0x159b) 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:09.935 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:24:09.936 Found net devices under 0000:27:00.0: cvl_0_0 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:09.936 
00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:24:09.936 Found net devices under 0000:27:00.1: cvl_0_1 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:09.936 00:39:35 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:09.936 00:39:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:09.936 00:39:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:09.936 00:39:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:09.936 00:39:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:09.936 00:39:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:10.193 00:39:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:10.193 00:39:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:10.193 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:10.193 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:24:10.193 00:24:10.193 --- 10.0.0.2 ping statistics --- 00:24:10.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.193 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:24:10.193 00:39:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:10.193 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:10.193 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:24:10.193 00:24:10.193 --- 10.0.0.1 ping statistics --- 00:24:10.193 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:10.193 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:24:10.193 00:39:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:10.193 00:39:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:24:10.193 00:39:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:10.193 00:39:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:10.193 00:39:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:10.193 00:39:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:10.193 00:39:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:10.193 00:39:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:10.193 00:39:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:10.193 00:39:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:10.193 00:39:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:10.193 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@721 -- # xtrace_disable 00:24:10.193 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:10.193 00:39:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2089211 00:24:10.193 00:39:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2089211 00:24:10.193 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@828 -- # '[' -z 2089211 ']' 00:24:10.193 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:10.193 00:39:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:10.193 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:10.193 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:10.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:10.193 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:10.193 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:10.193 [2024-05-15 00:39:36.223861] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
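As in the previous test, nvmfappstart runs the target under ip netns exec and then blocks until the RPC socket answers; the harness's waitforlisten helper does this with a bounded retry loop against /var/tmp/spdk.sock. A minimal stand-in for that step is sketched below; the polling loop is a simplification of what autotest_common.sh actually does, and rpc_get_methods is only used here as a cheap liveness probe.

#!/usr/bin/env bash
# Minimal stand-in for nvmfappstart/waitforlisten: launch nvmf_tgt inside the
# target namespace and poll the RPC socket until it responds. The real helper
# has more careful retry and error handling than this loop.
SPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk
NS=cvl_0_0_ns_spdk

ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!

for _ in $(seq 1 100); do
    if "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.1
done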
00:24:10.193 [2024-05-15 00:39:36.223961] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:10.193 EAL: No free 2048 kB hugepages reported on node 1 00:24:10.193 [2024-05-15 00:39:36.343896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.452 [2024-05-15 00:39:36.441166] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:10.452 [2024-05-15 00:39:36.441204] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:10.452 [2024-05-15 00:39:36.441213] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:10.452 [2024-05-15 00:39:36.441222] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:10.452 [2024-05-15 00:39:36.441231] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:10.452 [2024-05-15 00:39:36.441262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@861 -- # return 0 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:11.021 [2024-05-15 00:39:36.947355] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:11.021 null0 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d11ade245926466aa73ee02a291228f5 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:11.021 [2024-05-15 00:39:36.991299] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:11.021 [2024-05-15 00:39:36.991606] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.021 00:39:36 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:11.279 nvme0n1 00:24:11.279 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.279 00:39:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:11.279 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.279 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:11.279 [ 00:24:11.279 { 00:24:11.279 "name": "nvme0n1", 00:24:11.279 "aliases": [ 00:24:11.279 "d11ade24-5926-466a-a73e-e02a291228f5" 00:24:11.279 ], 00:24:11.279 "product_name": "NVMe disk", 00:24:11.279 "block_size": 512, 00:24:11.279 "num_blocks": 2097152, 00:24:11.279 "uuid": "d11ade24-5926-466a-a73e-e02a291228f5", 00:24:11.279 "assigned_rate_limits": { 00:24:11.279 "rw_ios_per_sec": 0, 00:24:11.279 "rw_mbytes_per_sec": 0, 00:24:11.279 "r_mbytes_per_sec": 0, 00:24:11.279 "w_mbytes_per_sec": 0 00:24:11.279 }, 00:24:11.279 "claimed": false, 00:24:11.279 "zoned": false, 00:24:11.279 "supported_io_types": { 00:24:11.279 "read": true, 00:24:11.279 "write": true, 00:24:11.279 "unmap": false, 00:24:11.279 "write_zeroes": true, 00:24:11.279 "flush": true, 00:24:11.279 "reset": true, 00:24:11.279 "compare": true, 00:24:11.279 "compare_and_write": true, 00:24:11.279 "abort": true, 00:24:11.279 "nvme_admin": true, 00:24:11.279 "nvme_io": true 00:24:11.279 }, 00:24:11.279 "memory_domains": [ 00:24:11.279 { 00:24:11.279 "dma_device_id": "system", 00:24:11.279 "dma_device_type": 1 00:24:11.279 } 00:24:11.279 ], 00:24:11.279 "driver_specific": { 00:24:11.279 "nvme": [ 00:24:11.279 { 00:24:11.279 "trid": { 00:24:11.279 "trtype": "TCP", 00:24:11.279 "adrfam": "IPv4", 00:24:11.279 "traddr": "10.0.0.2", 00:24:11.279 "trsvcid": "4420", 00:24:11.279 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:11.280 }, 
00:24:11.280 "ctrlr_data": { 00:24:11.280 "cntlid": 1, 00:24:11.280 "vendor_id": "0x8086", 00:24:11.280 "model_number": "SPDK bdev Controller", 00:24:11.280 "serial_number": "00000000000000000000", 00:24:11.280 "firmware_revision": "24.05", 00:24:11.280 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:11.280 "oacs": { 00:24:11.280 "security": 0, 00:24:11.280 "format": 0, 00:24:11.280 "firmware": 0, 00:24:11.280 "ns_manage": 0 00:24:11.280 }, 00:24:11.280 "multi_ctrlr": true, 00:24:11.280 "ana_reporting": false 00:24:11.280 }, 00:24:11.280 "vs": { 00:24:11.280 "nvme_version": "1.3" 00:24:11.280 }, 00:24:11.280 "ns_data": { 00:24:11.280 "id": 1, 00:24:11.280 "can_share": true 00:24:11.280 } 00:24:11.280 } 00:24:11.280 ], 00:24:11.280 "mp_policy": "active_passive" 00:24:11.280 } 00:24:11.280 } 00:24:11.280 ] 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:11.280 [2024-05-15 00:39:37.241104] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:11.280 [2024-05-15 00:39:37.241199] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a1180 (9): Bad file descriptor 00:24:11.280 [2024-05-15 00:39:37.372670] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:11.280 [ 00:24:11.280 { 00:24:11.280 "name": "nvme0n1", 00:24:11.280 "aliases": [ 00:24:11.280 "d11ade24-5926-466a-a73e-e02a291228f5" 00:24:11.280 ], 00:24:11.280 "product_name": "NVMe disk", 00:24:11.280 "block_size": 512, 00:24:11.280 "num_blocks": 2097152, 00:24:11.280 "uuid": "d11ade24-5926-466a-a73e-e02a291228f5", 00:24:11.280 "assigned_rate_limits": { 00:24:11.280 "rw_ios_per_sec": 0, 00:24:11.280 "rw_mbytes_per_sec": 0, 00:24:11.280 "r_mbytes_per_sec": 0, 00:24:11.280 "w_mbytes_per_sec": 0 00:24:11.280 }, 00:24:11.280 "claimed": false, 00:24:11.280 "zoned": false, 00:24:11.280 "supported_io_types": { 00:24:11.280 "read": true, 00:24:11.280 "write": true, 00:24:11.280 "unmap": false, 00:24:11.280 "write_zeroes": true, 00:24:11.280 "flush": true, 00:24:11.280 "reset": true, 00:24:11.280 "compare": true, 00:24:11.280 "compare_and_write": true, 00:24:11.280 "abort": true, 00:24:11.280 "nvme_admin": true, 00:24:11.280 "nvme_io": true 00:24:11.280 }, 00:24:11.280 "memory_domains": [ 00:24:11.280 { 00:24:11.280 "dma_device_id": "system", 00:24:11.280 "dma_device_type": 1 00:24:11.280 } 00:24:11.280 ], 00:24:11.280 "driver_specific": { 00:24:11.280 "nvme": [ 00:24:11.280 { 00:24:11.280 "trid": { 00:24:11.280 "trtype": "TCP", 00:24:11.280 "adrfam": "IPv4", 00:24:11.280 "traddr": "10.0.0.2", 00:24:11.280 "trsvcid": "4420", 00:24:11.280 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:11.280 }, 00:24:11.280 "ctrlr_data": { 00:24:11.280 "cntlid": 2, 00:24:11.280 
"vendor_id": "0x8086", 00:24:11.280 "model_number": "SPDK bdev Controller", 00:24:11.280 "serial_number": "00000000000000000000", 00:24:11.280 "firmware_revision": "24.05", 00:24:11.280 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:11.280 "oacs": { 00:24:11.280 "security": 0, 00:24:11.280 "format": 0, 00:24:11.280 "firmware": 0, 00:24:11.280 "ns_manage": 0 00:24:11.280 }, 00:24:11.280 "multi_ctrlr": true, 00:24:11.280 "ana_reporting": false 00:24:11.280 }, 00:24:11.280 "vs": { 00:24:11.280 "nvme_version": "1.3" 00:24:11.280 }, 00:24:11.280 "ns_data": { 00:24:11.280 "id": 1, 00:24:11.280 "can_share": true 00:24:11.280 } 00:24:11.280 } 00:24:11.280 ], 00:24:11.280 "mp_policy": "active_passive" 00:24:11.280 } 00:24:11.280 } 00:24:11.280 ] 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.hEkrPMhhIn 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.hEkrPMhhIn 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:11.280 [2024-05-15 00:39:37.417230] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:11.280 [2024-05-15 00:39:37.417371] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hEkrPMhhIn 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:11.280 [2024-05-15 00:39:37.425225] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.280 00:39:37 
nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.hEkrPMhhIn 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.280 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:11.280 [2024-05-15 00:39:37.433241] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:11.280 [2024-05-15 00:39:37.433319] nvme_tcp.c:2580:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:11.539 nvme0n1 00:24:11.539 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.539 00:39:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:11.539 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.539 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:11.539 [ 00:24:11.539 { 00:24:11.539 "name": "nvme0n1", 00:24:11.539 "aliases": [ 00:24:11.539 "d11ade24-5926-466a-a73e-e02a291228f5" 00:24:11.539 ], 00:24:11.539 "product_name": "NVMe disk", 00:24:11.539 "block_size": 512, 00:24:11.539 "num_blocks": 2097152, 00:24:11.539 "uuid": "d11ade24-5926-466a-a73e-e02a291228f5", 00:24:11.539 "assigned_rate_limits": { 00:24:11.539 "rw_ios_per_sec": 0, 00:24:11.539 "rw_mbytes_per_sec": 0, 00:24:11.539 "r_mbytes_per_sec": 0, 00:24:11.539 "w_mbytes_per_sec": 0 00:24:11.539 }, 00:24:11.539 "claimed": false, 00:24:11.539 "zoned": false, 00:24:11.539 "supported_io_types": { 00:24:11.539 "read": true, 00:24:11.539 "write": true, 00:24:11.539 "unmap": false, 00:24:11.539 "write_zeroes": true, 00:24:11.539 "flush": true, 00:24:11.539 "reset": true, 00:24:11.539 "compare": true, 00:24:11.539 "compare_and_write": true, 00:24:11.539 "abort": true, 00:24:11.539 "nvme_admin": true, 00:24:11.539 "nvme_io": true 00:24:11.539 }, 00:24:11.539 "memory_domains": [ 00:24:11.539 { 00:24:11.539 "dma_device_id": "system", 00:24:11.539 "dma_device_type": 1 00:24:11.539 } 00:24:11.539 ], 00:24:11.539 "driver_specific": { 00:24:11.539 "nvme": [ 00:24:11.539 { 00:24:11.539 "trid": { 00:24:11.539 "trtype": "TCP", 00:24:11.539 "adrfam": "IPv4", 00:24:11.539 "traddr": "10.0.0.2", 00:24:11.539 "trsvcid": "4421", 00:24:11.539 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:11.539 }, 00:24:11.539 "ctrlr_data": { 00:24:11.539 "cntlid": 3, 00:24:11.539 "vendor_id": "0x8086", 00:24:11.539 "model_number": "SPDK bdev Controller", 00:24:11.539 "serial_number": "00000000000000000000", 00:24:11.539 "firmware_revision": "24.05", 00:24:11.539 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:11.539 "oacs": { 00:24:11.539 "security": 0, 00:24:11.539 "format": 0, 00:24:11.539 "firmware": 0, 00:24:11.539 "ns_manage": 0 00:24:11.539 }, 00:24:11.539 "multi_ctrlr": true, 00:24:11.539 "ana_reporting": false 00:24:11.539 }, 00:24:11.539 "vs": { 00:24:11.539 "nvme_version": "1.3" 00:24:11.539 }, 00:24:11.539 "ns_data": { 00:24:11.539 "id": 1, 00:24:11.539 "can_share": true 00:24:11.539 } 00:24:11.539 } 00:24:11.539 ], 00:24:11.539 "mp_policy": "active_passive" 00:24:11.539 } 00:24:11.539 } 00:24:11.539 ] 00:24:11.539 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.539 00:39:37 nvmf_tcp.nvmf_async_init -- 
host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.539 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:11.539 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:11.539 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:11.539 00:39:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.hEkrPMhhIn 00:24:11.539 00:39:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:11.539 00:39:37 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:24:11.539 00:39:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:11.539 00:39:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:24:11.539 00:39:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:11.539 00:39:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:24:11.539 00:39:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:11.539 00:39:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:11.539 rmmod nvme_tcp 00:24:11.539 rmmod nvme_fabrics 00:24:11.539 rmmod nvme_keyring 00:24:11.539 00:39:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:11.539 00:39:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:24:11.539 00:39:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:24:11.540 00:39:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2089211 ']' 00:24:11.540 00:39:37 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2089211 00:24:11.540 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@947 -- # '[' -z 2089211 ']' 00:24:11.540 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@951 -- # kill -0 2089211 00:24:11.540 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # uname 00:24:11.540 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:11.540 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2089211 00:24:11.540 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:24:11.540 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:24:11.540 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2089211' 00:24:11.540 killing process with pid 2089211 00:24:11.540 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # kill 2089211 00:24:11.540 [2024-05-15 00:39:37.623566] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:11.540 [2024-05-15 00:39:37.623601] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:11.540 [2024-05-15 00:39:37.623610] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:11.540 00:39:37 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@971 -- # wait 2089211 00:24:12.104 00:39:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:12.104 00:39:38 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:12.104 00:39:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:12.104 00:39:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:12.104 00:39:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:12.104 00:39:38 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.104 00:39:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:12.104 00:39:38 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.006 00:39:40 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:14.006 00:24:14.006 real 0m9.633s 00:24:14.006 user 0m3.411s 00:24:14.006 sys 0m4.514s 00:24:14.006 00:39:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # xtrace_disable 00:24:14.006 00:39:40 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:14.006 ************************************ 00:24:14.006 END TEST nvmf_async_init 00:24:14.006 ************************************ 00:24:14.264 00:39:40 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:14.264 00:39:40 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:24:14.264 00:39:40 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:24:14.264 00:39:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:14.264 ************************************ 00:24:14.264 START TEST dma 00:24:14.264 ************************************ 00:24:14.264 00:39:40 nvmf_tcp.dma -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:14.264 * Looking for test storage... 
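The async_init teardown above closes out the interesting part of that test: after the plain-TCP attach/reset/detach cycle, the script exercises the experimental NVMe/TCP TLS path by writing a PSK to a private temp file, restricting the subsystem to explicit hosts, opening a --secure-channel listener on port 4421, registering the host with that PSK, and re-attaching the controller through the secured listener. A minimal standalone sketch of that sequence, mirroring the rpc_cmd calls visible in the trace (the rpc.py path and key file handling are illustrative, and the --psk options are flagged as deprecated/experimental by this SPDK build):

    #!/usr/bin/env bash
    # Sketch of the TLS/PSK attach flow from host/async_init.sh (illustrative paths/values).
    rpc=./scripts/rpc.py
    subnqn=nqn.2016-06.io.spdk:cnode0
    hostnqn=nqn.2016-06.io.spdk:host1

    # PSK in the NVMe TLS interchange format, kept in a 0600 temp file as in the trace.
    key_path=$(mktemp)
    echo -n "NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" > "$key_path"
    chmod 0600 "$key_path"

    # Require explicit host registration, then expose a TLS-only listener on a second port.
    $rpc nvmf_subsystem_allow_any_host "$subnqn" --disable
    $rpc nvmf_subsystem_add_listener "$subnqn" -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --psk "$key_path"

    # Re-attach the initiator-side controller through the secured listener.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n "$subnqn" -q "$hostnqn" --psk "$key_path"

    rm -f "$key_path"

The bdev_get_bdevs output right after that attach is how the test verifies each reconnect actually happened: the namespace keeps its UUID while cntlid advances (1 for the original connection, 2 after bdev_nvme_reset_controller, 3 for the TLS-secured attach on 4421).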
00:24:14.264 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:24:14.264 00:39:40 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:24:14.264 00:39:40 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:24:14.264 00:39:40 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.264 00:39:40 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.264 00:39:40 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:14.264 00:39:40 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.264 00:39:40 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.264 00:39:40 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.264 00:39:40 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.264 00:39:40 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.264 00:39:40 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.264 00:39:40 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:14.264 00:39:40 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:24:14.264 00:39:40 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:24:14.264 00:39:40 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.264 00:39:40 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:14.264 00:39:40 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:14.264 00:39:40 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:14.264 00:39:40 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:24:14.264 00:39:40 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.264 00:39:40 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.264 00:39:40 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.264 00:39:40 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.264 00:39:40 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.264 00:39:40 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.264 00:39:40 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:24:14.264 00:39:40 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.264 00:39:40 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:24:14.264 00:39:40 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:14.264 00:39:40 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:14.264 00:39:40 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:14.264 00:39:40 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.264 00:39:40 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.264 00:39:40 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:14.264 00:39:40 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:14.264 00:39:40 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:14.264 00:39:40 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:14.264 00:39:40 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:24:14.264 00:24:14.264 real 0m0.089s 00:24:14.264 user 0m0.029s 00:24:14.264 sys 0m0.067s 00:24:14.264 00:39:40 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # xtrace_disable 00:24:14.264 00:39:40 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:24:14.264 ************************************ 00:24:14.264 END TEST dma 00:24:14.264 ************************************ 00:24:14.264 00:39:40 nvmf_tcp -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:14.264 00:39:40 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:24:14.264 00:39:40 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:24:14.264 00:39:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:14.264 ************************************ 00:24:14.264 START TEST nvmf_identify 00:24:14.264 ************************************ 00:24:14.264 00:39:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:14.264 * Looking for test storage... 
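The dma suite above passes in well under a second because host/dma.sh is an RDMA-only test: invoked with --transport=tcp it hits the transport guard traced at dma.sh lines 12 and 13 and exits 0 before setting anything up. A paraphrased sketch of that guard (the variable name $TEST_TRANSPORT is an assumption; the trace only shows its expanded value, tcp):

    # host/dma.sh guard, paraphrased from the traced lines @12 and @13.
    if [ "$TEST_TRANSPORT" != rdma ]; then
        # The DMA test body is only meaningful over RDMA; skip cleanly on TCP.
        exit 0
    fi

That is why the timing summary for TEST dma shows only about 0m0.089s of real time: the body of the test never runs in this TCP configuration.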
00:24:14.264 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:24:14.264 00:39:40 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:24:14.264 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.524 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.525 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:14.525 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:14.525 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:14.525 00:39:40 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:14.525 00:39:40 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:14.525 00:39:40 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:14.525 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:14.525 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.525 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:14.525 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:14.525 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:14.525 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.525 00:39:40 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:14.525 00:39:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.525 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:24:14.525 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:14.525 00:39:40 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:24:14.525 00:39:40 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:24:19.795 00:39:45 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:24:19.795 Found 0000:27:00.0 (0x8086 - 0x159b) 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:24:19.795 Found 0000:27:00.1 (0x8086 - 0x159b) 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:24:19.795 Found net devices under 0000:27:00.0: cvl_0_0 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify 
-- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:24:19.795 Found net devices under 0000:27:00.1: cvl_0_1 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:19.795 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:19.796 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:19.796 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:24:19.796 00:24:19.796 --- 10.0.0.2 ping statistics --- 00:24:19.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.796 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:19.796 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:19.796 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.053 ms 00:24:19.796 00:24:19.796 --- 10.0.0.1 ping statistics --- 00:24:19.796 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.796 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@721 -- # xtrace_disable 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2093467 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2093467 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@828 -- # '[' -z 2093467 ']' 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:19.796 00:39:45 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:19.796 [2024-05-15 00:39:45.600843] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:24:19.796 [2024-05-15 00:39:45.600950] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.796 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.796 [2024-05-15 00:39:45.728899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:19.796 [2024-05-15 00:39:45.827236] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:19.796 [2024-05-15 00:39:45.827273] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:19.796 [2024-05-15 00:39:45.827283] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:19.796 [2024-05-15 00:39:45.827292] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:19.796 [2024-05-15 00:39:45.827299] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:19.796 [2024-05-15 00:39:45.827373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.796 [2024-05-15 00:39:45.827479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:19.796 [2024-05-15 00:39:45.827582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.796 [2024-05-15 00:39:45.827589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@861 -- # return 0 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.361 [2024-05-15 00:39:46.300315] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.361 Malloc0 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.361 [2024-05-15 00:39:46.397828] 
nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:20.361 [2024-05-15 00:39:46.398117] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:20.361 00:39:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.361 [ 00:24:20.361 { 00:24:20.361 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:20.361 "subtype": "Discovery", 00:24:20.361 "listen_addresses": [ 00:24:20.361 { 00:24:20.361 "trtype": "TCP", 00:24:20.361 "adrfam": "IPv4", 00:24:20.361 "traddr": "10.0.0.2", 00:24:20.361 "trsvcid": "4420" 00:24:20.361 } 00:24:20.361 ], 00:24:20.361 "allow_any_host": true, 00:24:20.361 "hosts": [] 00:24:20.362 }, 00:24:20.362 { 00:24:20.362 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:20.362 "subtype": "NVMe", 00:24:20.362 "listen_addresses": [ 00:24:20.362 { 00:24:20.362 "trtype": "TCP", 00:24:20.362 "adrfam": "IPv4", 00:24:20.362 "traddr": "10.0.0.2", 00:24:20.362 "trsvcid": "4420" 00:24:20.362 } 00:24:20.362 ], 00:24:20.362 "allow_any_host": true, 00:24:20.362 "hosts": [], 00:24:20.362 "serial_number": "SPDK00000000000001", 00:24:20.362 "model_number": "SPDK bdev Controller", 00:24:20.362 "max_namespaces": 32, 00:24:20.362 "min_cntlid": 1, 00:24:20.362 "max_cntlid": 65519, 00:24:20.362 "namespaces": [ 00:24:20.362 { 00:24:20.362 "nsid": 1, 00:24:20.362 "bdev_name": "Malloc0", 00:24:20.362 "name": "Malloc0", 00:24:20.362 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:20.362 "eui64": "ABCDEF0123456789", 00:24:20.362 "uuid": "2e143035-ca2c-4110-9d3e-687456871066" 00:24:20.362 } 00:24:20.362 ] 00:24:20.362 } 00:24:20.362 ] 00:24:20.362 00:39:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:20.362 00:39:46 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:20.362 [2024-05-15 00:39:46.459443] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
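Before spdk_nvme_identify runs, identify.sh provisions the freshly started target entirely over RPC: a TCP transport with an 8192-byte IO unit size, a 64 MB / 512-byte-block malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 carrying that bdev as namespace 1, and listeners for both the subsystem and discovery on 10.0.0.2:4420. A condensed sketch of that provisioning plus the identify invocation, mirroring the rpc_cmd calls and binary path traced above (the rpc.py path is illustrative):

    # Target provisioning from host/identify.sh, mirroring the traced rpc_cmd calls.
    rpc=./scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192          # -u 8192: IO unit size in bytes
    $rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MB bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Query the discovery subsystem over NVMe/TCP with all debug log flags enabled.
    ./build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all

The nvmf_get_subsystems dump above is the state that sequence leaves behind (discovery plus cnode1 with Malloc0 as nsid 1), and the -L all flag is what produces the verbose nvme_tcp/nvme_ctrlr DEBUG traces that follow: the icreq/icresp exchange, FABRIC CONNECT, the VS and CAP property reads, CC.EN toggling until CSTS.RDY flips, and finally the IDENTIFY command itself.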
00:24:20.362 [2024-05-15 00:39:46.459527] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2093718 ] 00:24:20.362 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.362 [2024-05-15 00:39:46.511566] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:20.362 [2024-05-15 00:39:46.511644] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:20.362 [2024-05-15 00:39:46.511652] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:20.362 [2024-05-15 00:39:46.511671] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:20.362 [2024-05-15 00:39:46.511683] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:20.362 [2024-05-15 00:39:46.511995] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:20.362 [2024-05-15 00:39:46.512034] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000024980 0 00:24:20.362 [2024-05-15 00:39:46.518561] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:20.362 [2024-05-15 00:39:46.518580] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:20.362 [2024-05-15 00:39:46.518587] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:20.362 [2024-05-15 00:39:46.518593] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:20.362 [2024-05-15 00:39:46.518638] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.362 [2024-05-15 00:39:46.518647] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.362 [2024-05-15 00:39:46.518654] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:24:20.362 [2024-05-15 00:39:46.518677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:20.362 [2024-05-15 00:39:46.518698] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.625 [2024-05-15 00:39:46.526569] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.625 [2024-05-15 00:39:46.526585] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.625 [2024-05-15 00:39:46.526590] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.625 [2024-05-15 00:39:46.526598] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:24:20.625 [2024-05-15 00:39:46.526613] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:20.625 [2024-05-15 00:39:46.526626] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:20.625 [2024-05-15 00:39:46.526634] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:20.625 [2024-05-15 00:39:46.526650] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.625 [2024-05-15 00:39:46.526662] nvme_tcp.c: 
950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.625 [2024-05-15 00:39:46.526670] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:24:20.625 [2024-05-15 00:39:46.526685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.625 [2024-05-15 00:39:46.526703] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.625 [2024-05-15 00:39:46.526842] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.625 [2024-05-15 00:39:46.526854] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.625 [2024-05-15 00:39:46.526864] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.625 [2024-05-15 00:39:46.526869] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:24:20.625 [2024-05-15 00:39:46.526879] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:20.625 [2024-05-15 00:39:46.526887] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:20.625 [2024-05-15 00:39:46.526895] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.625 [2024-05-15 00:39:46.526902] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.625 [2024-05-15 00:39:46.526907] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:24:20.625 [2024-05-15 00:39:46.526920] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.625 [2024-05-15 00:39:46.526931] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.625 [2024-05-15 00:39:46.527026] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.625 [2024-05-15 00:39:46.527035] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.625 [2024-05-15 00:39:46.527039] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.625 [2024-05-15 00:39:46.527043] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:24:20.625 [2024-05-15 00:39:46.527050] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:20.625 [2024-05-15 00:39:46.527059] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:20.625 [2024-05-15 00:39:46.527067] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.625 [2024-05-15 00:39:46.527072] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.625 [2024-05-15 00:39:46.527078] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:24:20.625 [2024-05-15 00:39:46.527089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.625 [2024-05-15 00:39:46.527099] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.625 [2024-05-15 00:39:46.527184] 
nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.625 [2024-05-15 00:39:46.527192] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.625 [2024-05-15 00:39:46.527197] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.626 [2024-05-15 00:39:46.527201] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:24:20.626 [2024-05-15 00:39:46.527208] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:20.626 [2024-05-15 00:39:46.527218] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.626 [2024-05-15 00:39:46.527223] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.626 [2024-05-15 00:39:46.527229] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:24:20.626 [2024-05-15 00:39:46.527240] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.626 [2024-05-15 00:39:46.527251] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.626 [2024-05-15 00:39:46.527345] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.626 [2024-05-15 00:39:46.527352] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.626 [2024-05-15 00:39:46.527356] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.626 [2024-05-15 00:39:46.527361] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:24:20.626 [2024-05-15 00:39:46.527367] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:20.626 [2024-05-15 00:39:46.527374] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:20.626 [2024-05-15 00:39:46.527382] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:20.626 [2024-05-15 00:39:46.527490] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:20.626 [2024-05-15 00:39:46.527498] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:20.626 [2024-05-15 00:39:46.527512] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.626 [2024-05-15 00:39:46.527517] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.626 [2024-05-15 00:39:46.527523] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:24:20.626 [2024-05-15 00:39:46.527531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.626 [2024-05-15 00:39:46.527544] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.626 [2024-05-15 00:39:46.527634] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.626 [2024-05-15 00:39:46.527642] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.626 [2024-05-15 00:39:46.527646] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.626 [2024-05-15 00:39:46.527650] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:24:20.626 [2024-05-15 00:39:46.527656] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:20.626 [2024-05-15 00:39:46.527666] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.626 [2024-05-15 00:39:46.527673] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.626 [2024-05-15 00:39:46.527678] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:24:20.626 [2024-05-15 00:39:46.527687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.626 [2024-05-15 00:39:46.527698] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.626 [2024-05-15 00:39:46.527792] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.626 [2024-05-15 00:39:46.527799] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.626 [2024-05-15 00:39:46.527803] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.626 [2024-05-15 00:39:46.527807] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:24:20.626 [2024-05-15 00:39:46.527813] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:20.626 [2024-05-15 00:39:46.527820] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:20.626 [2024-05-15 00:39:46.527829] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:20.626 [2024-05-15 00:39:46.527840] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:20.626 [2024-05-15 00:39:46.527854] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.626 [2024-05-15 00:39:46.527859] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:24:20.626 [2024-05-15 00:39:46.527868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.626 [2024-05-15 00:39:46.527880] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.626 [2024-05-15 00:39:46.528013] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.626 [2024-05-15 00:39:46.528023] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.626 [2024-05-15 00:39:46.528028] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.626 [2024-05-15 00:39:46.528033] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=4096, cccid=0 00:24:20.626 [2024-05-15 00:39:46.528041] 
nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000024980): expected_datao=0, payload_size=4096 00:24:20.626 [2024-05-15 00:39:46.528047] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.626 [2024-05-15 00:39:46.528057] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.626 [2024-05-15 00:39:46.528062] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.626 [2024-05-15 00:39:46.528071] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.626 [2024-05-15 00:39:46.528077] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.626 [2024-05-15 00:39:46.528081] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.626 [2024-05-15 00:39:46.528086] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:24:20.626 [2024-05-15 00:39:46.528098] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:20.626 [2024-05-15 00:39:46.528105] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:20.626 [2024-05-15 00:39:46.528111] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:20.626 [2024-05-15 00:39:46.528119] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:20.626 [2024-05-15 00:39:46.528126] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:20.626 [2024-05-15 00:39:46.528133] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:20.626 [2024-05-15 00:39:46.528143] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:20.626 [2024-05-15 00:39:46.528152] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.626 [2024-05-15 00:39:46.528157] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.626 [2024-05-15 00:39:46.528163] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:24:20.626 [2024-05-15 00:39:46.528172] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:20.626 [2024-05-15 00:39:46.528183] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.626 [2024-05-15 00:39:46.528279] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.626 [2024-05-15 00:39:46.528286] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.626 [2024-05-15 00:39:46.528294] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.626 [2024-05-15 00:39:46.528299] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:24:20.626 [2024-05-15 00:39:46.528308] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.626 [2024-05-15 00:39:46.528315] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.626 [2024-05-15 00:39:46.528320] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:24:20.626 [2024-05-15 00:39:46.528330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.626 [2024-05-15 00:39:46.528337] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.626 [2024-05-15 00:39:46.528342] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.626 [2024-05-15 00:39:46.528346] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000024980) 00:24:20.626 [2024-05-15 00:39:46.528353] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.626 [2024-05-15 00:39:46.528360] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.626 [2024-05-15 00:39:46.528364] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.626 [2024-05-15 00:39:46.528370] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000024980) 00:24:20.626 [2024-05-15 00:39:46.528377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.626 [2024-05-15 00:39:46.528383] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.626 [2024-05-15 00:39:46.528388] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.626 [2024-05-15 00:39:46.528392] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.626 [2024-05-15 00:39:46.528399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.626 [2024-05-15 00:39:46.528405] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:20.626 [2024-05-15 00:39:46.528414] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:20.626 [2024-05-15 00:39:46.528424] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.626 [2024-05-15 00:39:46.528429] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:24:20.626 [2024-05-15 00:39:46.528438] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.626 [2024-05-15 00:39:46.528451] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.626 [2024-05-15 00:39:46.528459] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:24:20.626 [2024-05-15 00:39:46.528465] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:24:20.626 [2024-05-15 00:39:46.528470] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.626 [2024-05-15 00:39:46.528475] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:24:20.626 [2024-05-15 00:39:46.528636] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.627 [2024-05-15 00:39:46.528643] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.627 [2024-05-15 00:39:46.528647] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.627 [2024-05-15 00:39:46.528651] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:24:20.627 [2024-05-15 00:39:46.528658] nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:20.627 [2024-05-15 00:39:46.528665] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:20.627 [2024-05-15 00:39:46.528680] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.627 [2024-05-15 00:39:46.528685] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:24:20.627 [2024-05-15 00:39:46.528699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.627 [2024-05-15 00:39:46.528710] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:24:20.627 [2024-05-15 00:39:46.528814] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.627 [2024-05-15 00:39:46.528823] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.627 [2024-05-15 00:39:46.528831] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.627 [2024-05-15 00:39:46.528836] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=4096, cccid=4 00:24:20.627 [2024-05-15 00:39:46.528842] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x615000024980): expected_datao=0, payload_size=4096 00:24:20.627 [2024-05-15 00:39:46.528849] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.627 [2024-05-15 00:39:46.528863] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.627 [2024-05-15 00:39:46.528868] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.627 [2024-05-15 00:39:46.574560] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.627 [2024-05-15 00:39:46.574575] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.627 [2024-05-15 00:39:46.574580] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.627 [2024-05-15 00:39:46.574586] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:24:20.627 [2024-05-15 00:39:46.574606] nvme_ctrlr.c:4037:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:20.627 [2024-05-15 00:39:46.574643] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.627 [2024-05-15 00:39:46.574649] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:24:20.627 [2024-05-15 00:39:46.574662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.627 [2024-05-15 00:39:46.574673] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.627 [2024-05-15 00:39:46.574678] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:24:20.627 [2024-05-15 00:39:46.574683] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000024980) 00:24:20.627 [2024-05-15 00:39:46.574691] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.627 [2024-05-15 00:39:46.574707] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:24:20.627 [2024-05-15 00:39:46.574716] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:24:20.627 [2024-05-15 00:39:46.574946] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.627 [2024-05-15 00:39:46.574953] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.627 [2024-05-15 00:39:46.574958] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.627 [2024-05-15 00:39:46.574963] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=1024, cccid=4 00:24:20.627 [2024-05-15 00:39:46.574970] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x615000024980): expected_datao=0, payload_size=1024 00:24:20.627 [2024-05-15 00:39:46.574975] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.627 [2024-05-15 00:39:46.574986] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.627 [2024-05-15 00:39:46.574994] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.627 [2024-05-15 00:39:46.575002] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.627 [2024-05-15 00:39:46.575009] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.627 [2024-05-15 00:39:46.575013] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.627 [2024-05-15 00:39:46.575018] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x615000024980 00:24:20.627 [2024-05-15 00:39:46.616734] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.627 [2024-05-15 00:39:46.616749] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.627 [2024-05-15 00:39:46.616753] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.627 [2024-05-15 00:39:46.616759] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:24:20.627 [2024-05-15 00:39:46.616780] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.627 [2024-05-15 00:39:46.616785] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:24:20.627 [2024-05-15 00:39:46.616796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.627 [2024-05-15 00:39:46.616811] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:24:20.627 [2024-05-15 00:39:46.616943] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.627 [2024-05-15 00:39:46.616952] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.627 [2024-05-15 00:39:46.616956] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.627 [2024-05-15 00:39:46.616961] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x615000024980): datao=0, datal=3072, cccid=4 00:24:20.627 [2024-05-15 00:39:46.616966] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x615000024980): expected_datao=0, payload_size=3072 00:24:20.627 [2024-05-15 00:39:46.616971] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.627 [2024-05-15 00:39:46.616979] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.627 [2024-05-15 00:39:46.616984] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.627 [2024-05-15 00:39:46.616992] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.627 [2024-05-15 00:39:46.616999] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.627 [2024-05-15 00:39:46.617002] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.627 [2024-05-15 00:39:46.617007] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:24:20.627 [2024-05-15 00:39:46.617018] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.627 [2024-05-15 00:39:46.617024] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:24:20.627 [2024-05-15 00:39:46.617033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.627 [2024-05-15 00:39:46.617049] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:24:20.627 [2024-05-15 00:39:46.617164] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.627 [2024-05-15 00:39:46.617171] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.627 [2024-05-15 00:39:46.617175] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.627 [2024-05-15 00:39:46.617179] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=8, cccid=4 00:24:20.627 [2024-05-15 00:39:46.617184] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x615000024980): expected_datao=0, payload_size=8 00:24:20.627 [2024-05-15 00:39:46.617189] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.627 [2024-05-15 00:39:46.617198] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.627 [2024-05-15 00:39:46.617202] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.627 [2024-05-15 00:39:46.662563] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.627 [2024-05-15 00:39:46.662583] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.627 [2024-05-15 00:39:46.662587] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.627 [2024-05-15 00:39:46.662592] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:24:20.627 ===================================================== 00:24:20.627 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:20.627 ===================================================== 00:24:20.627 Controller Capabilities/Features 00:24:20.627 ================================ 00:24:20.627 Vendor ID: 0000 00:24:20.627 Subsystem Vendor ID: 0000 00:24:20.627 Serial Number: .................... 
00:24:20.627 Model Number: ........................................ 00:24:20.627 Firmware Version: 24.05 00:24:20.627 Recommended Arb Burst: 0 00:24:20.627 IEEE OUI Identifier: 00 00 00 00:24:20.627 Multi-path I/O 00:24:20.627 May have multiple subsystem ports: No 00:24:20.627 May have multiple controllers: No 00:24:20.627 Associated with SR-IOV VF: No 00:24:20.627 Max Data Transfer Size: 131072 00:24:20.627 Max Number of Namespaces: 0 00:24:20.627 Max Number of I/O Queues: 1024 00:24:20.627 NVMe Specification Version (VS): 1.3 00:24:20.627 NVMe Specification Version (Identify): 1.3 00:24:20.627 Maximum Queue Entries: 128 00:24:20.627 Contiguous Queues Required: Yes 00:24:20.627 Arbitration Mechanisms Supported 00:24:20.627 Weighted Round Robin: Not Supported 00:24:20.627 Vendor Specific: Not Supported 00:24:20.627 Reset Timeout: 15000 ms 00:24:20.627 Doorbell Stride: 4 bytes 00:24:20.627 NVM Subsystem Reset: Not Supported 00:24:20.627 Command Sets Supported 00:24:20.627 NVM Command Set: Supported 00:24:20.627 Boot Partition: Not Supported 00:24:20.627 Memory Page Size Minimum: 4096 bytes 00:24:20.627 Memory Page Size Maximum: 4096 bytes 00:24:20.627 Persistent Memory Region: Not Supported 00:24:20.627 Optional Asynchronous Events Supported 00:24:20.627 Namespace Attribute Notices: Not Supported 00:24:20.627 Firmware Activation Notices: Not Supported 00:24:20.627 ANA Change Notices: Not Supported 00:24:20.627 PLE Aggregate Log Change Notices: Not Supported 00:24:20.627 LBA Status Info Alert Notices: Not Supported 00:24:20.627 EGE Aggregate Log Change Notices: Not Supported 00:24:20.627 Normal NVM Subsystem Shutdown event: Not Supported 00:24:20.627 Zone Descriptor Change Notices: Not Supported 00:24:20.627 Discovery Log Change Notices: Supported 00:24:20.627 Controller Attributes 00:24:20.627 128-bit Host Identifier: Not Supported 00:24:20.628 Non-Operational Permissive Mode: Not Supported 00:24:20.628 NVM Sets: Not Supported 00:24:20.628 Read Recovery Levels: Not Supported 00:24:20.628 Endurance Groups: Not Supported 00:24:20.628 Predictable Latency Mode: Not Supported 00:24:20.628 Traffic Based Keep ALive: Not Supported 00:24:20.628 Namespace Granularity: Not Supported 00:24:20.628 SQ Associations: Not Supported 00:24:20.628 UUID List: Not Supported 00:24:20.628 Multi-Domain Subsystem: Not Supported 00:24:20.628 Fixed Capacity Management: Not Supported 00:24:20.628 Variable Capacity Management: Not Supported 00:24:20.628 Delete Endurance Group: Not Supported 00:24:20.628 Delete NVM Set: Not Supported 00:24:20.628 Extended LBA Formats Supported: Not Supported 00:24:20.628 Flexible Data Placement Supported: Not Supported 00:24:20.628 00:24:20.628 Controller Memory Buffer Support 00:24:20.628 ================================ 00:24:20.628 Supported: No 00:24:20.628 00:24:20.628 Persistent Memory Region Support 00:24:20.628 ================================ 00:24:20.628 Supported: No 00:24:20.628 00:24:20.628 Admin Command Set Attributes 00:24:20.628 ============================ 00:24:20.628 Security Send/Receive: Not Supported 00:24:20.628 Format NVM: Not Supported 00:24:20.628 Firmware Activate/Download: Not Supported 00:24:20.628 Namespace Management: Not Supported 00:24:20.628 Device Self-Test: Not Supported 00:24:20.628 Directives: Not Supported 00:24:20.628 NVMe-MI: Not Supported 00:24:20.628 Virtualization Management: Not Supported 00:24:20.628 Doorbell Buffer Config: Not Supported 00:24:20.628 Get LBA Status Capability: Not Supported 00:24:20.628 Command & Feature Lockdown Capability: 
Not Supported 00:24:20.628 Abort Command Limit: 1 00:24:20.628 Async Event Request Limit: 4 00:24:20.628 Number of Firmware Slots: N/A 00:24:20.628 Firmware Slot 1 Read-Only: N/A 00:24:20.628 Firmware Activation Without Reset: N/A 00:24:20.628 Multiple Update Detection Support: N/A 00:24:20.628 Firmware Update Granularity: No Information Provided 00:24:20.628 Per-Namespace SMART Log: No 00:24:20.628 Asymmetric Namespace Access Log Page: Not Supported 00:24:20.628 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:20.628 Command Effects Log Page: Not Supported 00:24:20.628 Get Log Page Extended Data: Supported 00:24:20.628 Telemetry Log Pages: Not Supported 00:24:20.628 Persistent Event Log Pages: Not Supported 00:24:20.628 Supported Log Pages Log Page: May Support 00:24:20.628 Commands Supported & Effects Log Page: Not Supported 00:24:20.628 Feature Identifiers & Effects Log Page:May Support 00:24:20.628 NVMe-MI Commands & Effects Log Page: May Support 00:24:20.628 Data Area 4 for Telemetry Log: Not Supported 00:24:20.628 Error Log Page Entries Supported: 128 00:24:20.628 Keep Alive: Not Supported 00:24:20.628 00:24:20.628 NVM Command Set Attributes 00:24:20.628 ========================== 00:24:20.628 Submission Queue Entry Size 00:24:20.628 Max: 1 00:24:20.628 Min: 1 00:24:20.628 Completion Queue Entry Size 00:24:20.628 Max: 1 00:24:20.628 Min: 1 00:24:20.628 Number of Namespaces: 0 00:24:20.628 Compare Command: Not Supported 00:24:20.628 Write Uncorrectable Command: Not Supported 00:24:20.628 Dataset Management Command: Not Supported 00:24:20.628 Write Zeroes Command: Not Supported 00:24:20.628 Set Features Save Field: Not Supported 00:24:20.628 Reservations: Not Supported 00:24:20.628 Timestamp: Not Supported 00:24:20.628 Copy: Not Supported 00:24:20.628 Volatile Write Cache: Not Present 00:24:20.628 Atomic Write Unit (Normal): 1 00:24:20.628 Atomic Write Unit (PFail): 1 00:24:20.628 Atomic Compare & Write Unit: 1 00:24:20.628 Fused Compare & Write: Supported 00:24:20.628 Scatter-Gather List 00:24:20.628 SGL Command Set: Supported 00:24:20.628 SGL Keyed: Supported 00:24:20.628 SGL Bit Bucket Descriptor: Not Supported 00:24:20.628 SGL Metadata Pointer: Not Supported 00:24:20.628 Oversized SGL: Not Supported 00:24:20.628 SGL Metadata Address: Not Supported 00:24:20.628 SGL Offset: Supported 00:24:20.628 Transport SGL Data Block: Not Supported 00:24:20.628 Replay Protected Memory Block: Not Supported 00:24:20.628 00:24:20.628 Firmware Slot Information 00:24:20.628 ========================= 00:24:20.628 Active slot: 0 00:24:20.628 00:24:20.628 00:24:20.628 Error Log 00:24:20.628 ========= 00:24:20.628 00:24:20.628 Active Namespaces 00:24:20.628 ================= 00:24:20.628 Discovery Log Page 00:24:20.628 ================== 00:24:20.628 Generation Counter: 2 00:24:20.628 Number of Records: 2 00:24:20.628 Record Format: 0 00:24:20.628 00:24:20.628 Discovery Log Entry 0 00:24:20.628 ---------------------- 00:24:20.628 Transport Type: 3 (TCP) 00:24:20.628 Address Family: 1 (IPv4) 00:24:20.628 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:20.628 Entry Flags: 00:24:20.628 Duplicate Returned Information: 1 00:24:20.628 Explicit Persistent Connection Support for Discovery: 1 00:24:20.628 Transport Requirements: 00:24:20.628 Secure Channel: Not Required 00:24:20.628 Port ID: 0 (0x0000) 00:24:20.628 Controller ID: 65535 (0xffff) 00:24:20.628 Admin Max SQ Size: 128 00:24:20.628 Transport Service Identifier: 4420 00:24:20.628 NVM Subsystem Qualified Name: 
nqn.2014-08.org.nvmexpress.discovery 00:24:20.628 Transport Address: 10.0.0.2 00:24:20.628 Discovery Log Entry 1 00:24:20.628 ---------------------- 00:24:20.628 Transport Type: 3 (TCP) 00:24:20.628 Address Family: 1 (IPv4) 00:24:20.628 Subsystem Type: 2 (NVM Subsystem) 00:24:20.628 Entry Flags: 00:24:20.628 Duplicate Returned Information: 0 00:24:20.628 Explicit Persistent Connection Support for Discovery: 0 00:24:20.628 Transport Requirements: 00:24:20.628 Secure Channel: Not Required 00:24:20.628 Port ID: 0 (0x0000) 00:24:20.628 Controller ID: 65535 (0xffff) 00:24:20.628 Admin Max SQ Size: 128 00:24:20.628 Transport Service Identifier: 4420 00:24:20.628 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:20.628 Transport Address: 10.0.0.2 [2024-05-15 00:39:46.662710] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:20.628 [2024-05-15 00:39:46.662726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.628 [2024-05-15 00:39:46.662734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.628 [2024-05-15 00:39:46.662741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.628 [2024-05-15 00:39:46.662748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.628 [2024-05-15 00:39:46.662759] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.628 [2024-05-15 00:39:46.662765] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.628 [2024-05-15 00:39:46.662770] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.628 [2024-05-15 00:39:46.662781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.628 [2024-05-15 00:39:46.662799] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.628 [2024-05-15 00:39:46.662894] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.628 [2024-05-15 00:39:46.662902] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.628 [2024-05-15 00:39:46.662910] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.628 [2024-05-15 00:39:46.662915] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.628 [2024-05-15 00:39:46.662927] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.628 [2024-05-15 00:39:46.662932] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.628 [2024-05-15 00:39:46.662937] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.628 [2024-05-15 00:39:46.662948] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.628 [2024-05-15 00:39:46.662961] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.628 [2024-05-15 00:39:46.663070] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.628 [2024-05-15 00:39:46.663077] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.628 [2024-05-15 00:39:46.663081] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.628 [2024-05-15 00:39:46.663086] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.628 [2024-05-15 00:39:46.663092] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:20.628 [2024-05-15 00:39:46.663099] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:20.628 [2024-05-15 00:39:46.663110] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.628 [2024-05-15 00:39:46.663115] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.628 [2024-05-15 00:39:46.663121] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.628 [2024-05-15 00:39:46.663129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.628 [2024-05-15 00:39:46.663141] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.628 [2024-05-15 00:39:46.663236] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.629 [2024-05-15 00:39:46.663242] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.629 [2024-05-15 00:39:46.663246] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.663251] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.629 [2024-05-15 00:39:46.663261] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.663266] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.663270] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.629 [2024-05-15 00:39:46.663278] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.629 [2024-05-15 00:39:46.663288] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.629 [2024-05-15 00:39:46.663374] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.629 [2024-05-15 00:39:46.663380] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.629 [2024-05-15 00:39:46.663384] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.663389] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.629 [2024-05-15 00:39:46.663398] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.663403] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.663407] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.629 [2024-05-15 00:39:46.663415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.629 [2024-05-15 00:39:46.663425] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.629 [2024-05-15 00:39:46.663511] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.629 [2024-05-15 00:39:46.663518] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.629 [2024-05-15 00:39:46.663521] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.663526] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.629 [2024-05-15 00:39:46.663535] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.663539] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.663544] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.629 [2024-05-15 00:39:46.663558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.629 [2024-05-15 00:39:46.663568] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.629 [2024-05-15 00:39:46.663652] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.629 [2024-05-15 00:39:46.663658] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.629 [2024-05-15 00:39:46.663662] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.663667] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.629 [2024-05-15 00:39:46.663676] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.663680] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.663685] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.629 [2024-05-15 00:39:46.663693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.629 [2024-05-15 00:39:46.663704] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.629 [2024-05-15 00:39:46.663792] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.629 [2024-05-15 00:39:46.663799] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.629 [2024-05-15 00:39:46.663803] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.663807] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.629 [2024-05-15 00:39:46.663817] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.663821] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.663826] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.629 [2024-05-15 00:39:46.663834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.629 [2024-05-15 00:39:46.663843] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.629 [2024-05-15 00:39:46.663925] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:24:20.629 [2024-05-15 00:39:46.663932] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.629 [2024-05-15 00:39:46.663936] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.663941] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.629 [2024-05-15 00:39:46.663950] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.663954] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.663959] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.629 [2024-05-15 00:39:46.663967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.629 [2024-05-15 00:39:46.663976] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.629 [2024-05-15 00:39:46.664068] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.629 [2024-05-15 00:39:46.664074] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.629 [2024-05-15 00:39:46.664078] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.664083] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.629 [2024-05-15 00:39:46.664092] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.664096] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.664101] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.629 [2024-05-15 00:39:46.664110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.629 [2024-05-15 00:39:46.664120] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.629 [2024-05-15 00:39:46.664207] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.629 [2024-05-15 00:39:46.664213] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.629 [2024-05-15 00:39:46.664217] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.664222] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.629 [2024-05-15 00:39:46.664231] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.664236] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.664240] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.629 [2024-05-15 00:39:46.664248] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.629 [2024-05-15 00:39:46.664258] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.629 [2024-05-15 00:39:46.664354] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.629 [2024-05-15 00:39:46.664360] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:24:20.629 [2024-05-15 00:39:46.664364] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.664369] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.629 [2024-05-15 00:39:46.664378] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.664382] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.664387] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.629 [2024-05-15 00:39:46.664395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.629 [2024-05-15 00:39:46.664404] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.629 [2024-05-15 00:39:46.664497] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.629 [2024-05-15 00:39:46.664504] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.629 [2024-05-15 00:39:46.664508] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.664512] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.629 [2024-05-15 00:39:46.664521] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.664526] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.664530] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.629 [2024-05-15 00:39:46.664538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.629 [2024-05-15 00:39:46.664548] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.629 [2024-05-15 00:39:46.664626] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.629 [2024-05-15 00:39:46.664633] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.629 [2024-05-15 00:39:46.664637] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.664641] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.629 [2024-05-15 00:39:46.664650] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.664655] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.664659] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.629 [2024-05-15 00:39:46.664667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.629 [2024-05-15 00:39:46.664677] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.629 [2024-05-15 00:39:46.664759] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.629 [2024-05-15 00:39:46.664765] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.629 [2024-05-15 00:39:46.664769] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.629 [2024-05-15 
00:39:46.664774] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.629 [2024-05-15 00:39:46.664783] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.664788] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.629 [2024-05-15 00:39:46.664792] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.629 [2024-05-15 00:39:46.664800] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.629 [2024-05-15 00:39:46.664811] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.630 [2024-05-15 00:39:46.664891] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.630 [2024-05-15 00:39:46.664898] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.630 [2024-05-15 00:39:46.664902] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.664906] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.630 [2024-05-15 00:39:46.664915] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.664920] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.664924] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.630 [2024-05-15 00:39:46.664932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-05-15 00:39:46.664942] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.630 [2024-05-15 00:39:46.665024] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.630 [2024-05-15 00:39:46.665030] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.630 [2024-05-15 00:39:46.665034] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.665038] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.630 [2024-05-15 00:39:46.665048] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.665053] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.665057] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.630 [2024-05-15 00:39:46.665065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-05-15 00:39:46.665075] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.630 [2024-05-15 00:39:46.665161] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.630 [2024-05-15 00:39:46.665167] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.630 [2024-05-15 00:39:46.665171] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.665176] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 
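The repeated FABRIC PROPERTY GET capsules on cid 3 in this stretch of the trace are the library polling shutdown status after the discovery controller was told to shut down ("Prepare to destruct SSD" above); the poll ends at the "shutdown complete" entry further below. From an application's point of view the whole sequence sits behind one public call. A minimal sketch, assuming SPDK's public C API (spdk_nvme_detach) and a previously connected controller handle:

#include "spdk/nvme.h"

/* Illustrative only: triggers the normal-shutdown sequence traced above.
 * SPDK writes CC.SHN and then polls CSTS.SHST internally until the
 * controller reports shutdown complete or the 10000 ms shutdown timeout
 * from the log expires. */
static void
teardown_controller(struct spdk_nvme_ctrlr *ctrlr)
{
	spdk_nvme_detach(ctrlr);
}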
00:24:20.630 [2024-05-15 00:39:46.665185] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.665189] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.665194] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.630 [2024-05-15 00:39:46.665203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-05-15 00:39:46.665213] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.630 [2024-05-15 00:39:46.665298] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.630 [2024-05-15 00:39:46.665306] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.630 [2024-05-15 00:39:46.665310] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.665314] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.630 [2024-05-15 00:39:46.665324] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.665328] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.665332] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.630 [2024-05-15 00:39:46.665341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-05-15 00:39:46.665352] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.630 [2024-05-15 00:39:46.665445] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.630 [2024-05-15 00:39:46.665451] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.630 [2024-05-15 00:39:46.665456] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.665460] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.630 [2024-05-15 00:39:46.665469] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.665474] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.665478] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.630 [2024-05-15 00:39:46.665486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-05-15 00:39:46.665496] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.630 [2024-05-15 00:39:46.665580] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.630 [2024-05-15 00:39:46.665587] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.630 [2024-05-15 00:39:46.665593] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.665597] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.630 [2024-05-15 00:39:46.665607] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.630 [2024-05-15 
00:39:46.665612] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.665618] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.630 [2024-05-15 00:39:46.665626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-05-15 00:39:46.665637] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.630 [2024-05-15 00:39:46.665720] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.630 [2024-05-15 00:39:46.665727] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.630 [2024-05-15 00:39:46.665731] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.665737] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.630 [2024-05-15 00:39:46.665748] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.665752] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.665756] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.630 [2024-05-15 00:39:46.665764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-05-15 00:39:46.665773] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.630 [2024-05-15 00:39:46.665862] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.630 [2024-05-15 00:39:46.665869] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.630 [2024-05-15 00:39:46.665873] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.665878] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.630 [2024-05-15 00:39:46.665887] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.665892] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.665896] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.630 [2024-05-15 00:39:46.665904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-05-15 00:39:46.665915] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.630 [2024-05-15 00:39:46.666010] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.630 [2024-05-15 00:39:46.666017] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.630 [2024-05-15 00:39:46.666020] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.666025] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.630 [2024-05-15 00:39:46.666034] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.666038] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.666043] nvme_tcp.c: 
959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.630 [2024-05-15 00:39:46.666050] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.630 [2024-05-15 00:39:46.666060] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.630 [2024-05-15 00:39:46.666142] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.630 [2024-05-15 00:39:46.666148] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.630 [2024-05-15 00:39:46.666152] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.666156] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.630 [2024-05-15 00:39:46.666166] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.666170] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.630 [2024-05-15 00:39:46.666175] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.631 [2024-05-15 00:39:46.666182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-05-15 00:39:46.666192] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.631 [2024-05-15 00:39:46.666282] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.631 [2024-05-15 00:39:46.666288] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.631 [2024-05-15 00:39:46.666292] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.631 [2024-05-15 00:39:46.666296] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.631 [2024-05-15 00:39:46.666306] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.631 [2024-05-15 00:39:46.666310] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.631 [2024-05-15 00:39:46.666314] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.631 [2024-05-15 00:39:46.666325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-05-15 00:39:46.666335] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.631 [2024-05-15 00:39:46.666420] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.631 [2024-05-15 00:39:46.666426] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.631 [2024-05-15 00:39:46.666430] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.631 [2024-05-15 00:39:46.666434] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.631 [2024-05-15 00:39:46.666443] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.631 [2024-05-15 00:39:46.666448] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.631 [2024-05-15 00:39:46.666452] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.631 [2024-05-15 00:39:46.666460] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-05-15 00:39:46.666471] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.631 [2024-05-15 00:39:46.670560] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.631 [2024-05-15 00:39:46.670568] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.631 [2024-05-15 00:39:46.670572] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.631 [2024-05-15 00:39:46.670577] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.631 [2024-05-15 00:39:46.670586] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.631 [2024-05-15 00:39:46.670591] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.631 [2024-05-15 00:39:46.670595] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.631 [2024-05-15 00:39:46.670603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.631 [2024-05-15 00:39:46.670614] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.631 [2024-05-15 00:39:46.670697] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.631 [2024-05-15 00:39:46.670704] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.631 [2024-05-15 00:39:46.670708] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.631 [2024-05-15 00:39:46.670712] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.631 [2024-05-15 00:39:46.670721] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:24:20.631 00:24:20.631 00:39:46 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:20.631 [2024-05-15 00:39:46.739981] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
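The spdk_nvme_identify -r '...' invocation above hands the tool a transport ID string (TCP, 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1); the trace that follows is the admin-queue connect and controller-initialization state machine that a connect call drives. As a rough illustration of the same flow through SPDK's public C API (a sketch, not the identify tool's actual source; the application name is made up):

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch"; /* hypothetical app name */
	if (spdk_env_init(&env_opts) != 0) {
		return 1;
	}

	/* Same transport ID string passed via -r above. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Drives the sequence traced below: socket connect, ICReq/ICResp,
	 * FABRIC CONNECT, register reads, CC.EN, IDENTIFY, AER setup. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Model Number: %.40s\n", (const char *)cdata->mn);

	spdk_nvme_detach(ctrlr);
	return 0;
}

spdk_nvme_connect() blocks until the initialization state machine reaches the ready state, which is why the per-state DEBUG lines below appear back to back before the tool prints its report.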
00:24:20.631 [2024-05-15 00:39:46.740066] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2093722 ] 00:24:20.631 EAL: No free 2048 kB hugepages reported on node 1 00:24:20.894 [2024-05-15 00:39:46.792352] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:20.894 [2024-05-15 00:39:46.792423] nvme_tcp.c:2329:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:20.894 [2024-05-15 00:39:46.792431] nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:20.894 [2024-05-15 00:39:46.792449] nvme_tcp.c:2351:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:20.894 [2024-05-15 00:39:46.792461] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:20.894 [2024-05-15 00:39:46.792851] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:20.894 [2024-05-15 00:39:46.792881] nvme_tcp.c:1546:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x615000024980 0 00:24:20.894 [2024-05-15 00:39:46.808561] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:20.894 [2024-05-15 00:39:46.808579] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:20.894 [2024-05-15 00:39:46.808586] nvme_tcp.c:1592:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:20.894 [2024-05-15 00:39:46.808592] nvme_tcp.c:1593:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:20.894 [2024-05-15 00:39:46.808628] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.894 [2024-05-15 00:39:46.808641] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.894 [2024-05-15 00:39:46.808648] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:24:20.894 [2024-05-15 00:39:46.808670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:20.894 [2024-05-15 00:39:46.808693] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.894 [2024-05-15 00:39:46.816567] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.894 [2024-05-15 00:39:46.816580] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.894 [2024-05-15 00:39:46.816585] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.894 [2024-05-15 00:39:46.816592] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:24:20.894 [2024-05-15 00:39:46.816606] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:20.894 [2024-05-15 00:39:46.816618] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:20.894 [2024-05-15 00:39:46.816629] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:20.894 [2024-05-15 00:39:46.816644] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.894 [2024-05-15 00:39:46.816651] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:24:20.894 [2024-05-15 00:39:46.816657] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:24:20.894 [2024-05-15 00:39:46.816673] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.894 [2024-05-15 00:39:46.816692] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.894 [2024-05-15 00:39:46.816785] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.894 [2024-05-15 00:39:46.816793] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.894 [2024-05-15 00:39:46.816803] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.894 [2024-05-15 00:39:46.816808] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:24:20.894 [2024-05-15 00:39:46.816817] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:20.894 [2024-05-15 00:39:46.816827] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:20.894 [2024-05-15 00:39:46.816835] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.894 [2024-05-15 00:39:46.816842] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.894 [2024-05-15 00:39:46.816847] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:24:20.894 [2024-05-15 00:39:46.816857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.894 [2024-05-15 00:39:46.816869] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.894 [2024-05-15 00:39:46.816941] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.894 [2024-05-15 00:39:46.816949] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.894 [2024-05-15 00:39:46.816953] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.894 [2024-05-15 00:39:46.816957] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:24:20.894 [2024-05-15 00:39:46.816964] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:20.894 [2024-05-15 00:39:46.816973] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:20.894 [2024-05-15 00:39:46.816981] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.894 [2024-05-15 00:39:46.816990] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.894 [2024-05-15 00:39:46.816996] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:24:20.894 [2024-05-15 00:39:46.817005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.894 [2024-05-15 00:39:46.817017] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.894 [2024-05-15 00:39:46.817080] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.894 [2024-05-15 00:39:46.817087] 
nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.894 [2024-05-15 00:39:46.817091] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.894 [2024-05-15 00:39:46.817095] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:24:20.894 [2024-05-15 00:39:46.817101] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:20.894 [2024-05-15 00:39:46.817111] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.894 [2024-05-15 00:39:46.817116] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.894 [2024-05-15 00:39:46.817121] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:24:20.894 [2024-05-15 00:39:46.817130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.894 [2024-05-15 00:39:46.817141] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.894 [2024-05-15 00:39:46.817209] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.894 [2024-05-15 00:39:46.817215] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.894 [2024-05-15 00:39:46.817220] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.894 [2024-05-15 00:39:46.817225] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:24:20.894 [2024-05-15 00:39:46.817231] nvme_ctrlr.c:3750:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:20.894 [2024-05-15 00:39:46.817237] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:20.894 [2024-05-15 00:39:46.817245] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:20.894 [2024-05-15 00:39:46.817352] nvme_ctrlr.c:3943:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:20.894 [2024-05-15 00:39:46.817361] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:20.894 [2024-05-15 00:39:46.817371] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.894 [2024-05-15 00:39:46.817377] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.894 [2024-05-15 00:39:46.817382] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:24:20.894 [2024-05-15 00:39:46.817391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.894 [2024-05-15 00:39:46.817402] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.894 [2024-05-15 00:39:46.817468] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.894 [2024-05-15 00:39:46.817475] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.894 [2024-05-15 00:39:46.817479] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.894 [2024-05-15 
00:39:46.817484] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:24:20.894 [2024-05-15 00:39:46.817493] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:20.894 [2024-05-15 00:39:46.817504] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.894 [2024-05-15 00:39:46.817509] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.894 [2024-05-15 00:39:46.817514] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:24:20.894 [2024-05-15 00:39:46.817524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.894 [2024-05-15 00:39:46.817535] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.894 [2024-05-15 00:39:46.817605] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.894 [2024-05-15 00:39:46.817611] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.894 [2024-05-15 00:39:46.817615] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.894 [2024-05-15 00:39:46.817620] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:24:20.894 [2024-05-15 00:39:46.817626] nvme_ctrlr.c:3785:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:20.894 [2024-05-15 00:39:46.817632] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:20.894 [2024-05-15 00:39:46.817641] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:20.894 [2024-05-15 00:39:46.817651] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:20.894 [2024-05-15 00:39:46.817665] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.894 [2024-05-15 00:39:46.817670] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:24:20.894 [2024-05-15 00:39:46.817683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.894 [2024-05-15 00:39:46.817693] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.894 [2024-05-15 00:39:46.817797] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.894 [2024-05-15 00:39:46.817803] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.894 [2024-05-15 00:39:46.817807] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.894 [2024-05-15 00:39:46.817812] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=4096, cccid=0 00:24:20.895 [2024-05-15 00:39:46.817821] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x615000024980): expected_datao=0, payload_size=4096 00:24:20.895 [2024-05-15 00:39:46.817826] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.895 
[2024-05-15 00:39:46.817840] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.817845] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.817867] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.895 [2024-05-15 00:39:46.817873] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.895 [2024-05-15 00:39:46.817877] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.817882] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:24:20.895 [2024-05-15 00:39:46.817894] nvme_ctrlr.c:1985:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:20.895 [2024-05-15 00:39:46.817902] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:20.895 [2024-05-15 00:39:46.817908] nvme_ctrlr.c:1992:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:20.895 [2024-05-15 00:39:46.817914] nvme_ctrlr.c:2016:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:20.895 [2024-05-15 00:39:46.817921] nvme_ctrlr.c:2031:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:20.895 [2024-05-15 00:39:46.817927] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:20.895 [2024-05-15 00:39:46.817938] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:20.895 [2024-05-15 00:39:46.817947] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.817954] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.817959] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:24:20.895 [2024-05-15 00:39:46.817970] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:20.895 [2024-05-15 00:39:46.817981] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.895 [2024-05-15 00:39:46.818051] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.895 [2024-05-15 00:39:46.818057] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.895 [2024-05-15 00:39:46.818062] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.818066] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x615000024980 00:24:20.895 [2024-05-15 00:39:46.818076] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.818082] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.818088] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x615000024980) 00:24:20.895 [2024-05-15 00:39:46.818097] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.895 [2024-05-15 00:39:46.818105] nvme_tcp.c: 767:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.818110] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.818115] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x615000024980) 00:24:20.895 [2024-05-15 00:39:46.818122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.895 [2024-05-15 00:39:46.818128] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.818133] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.818138] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x615000024980) 00:24:20.895 [2024-05-15 00:39:46.818147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.895 [2024-05-15 00:39:46.818153] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.818158] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.818162] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.895 [2024-05-15 00:39:46.818170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.895 [2024-05-15 00:39:46.818175] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:20.895 [2024-05-15 00:39:46.818184] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:20.895 [2024-05-15 00:39:46.818192] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.818197] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:24:20.895 [2024-05-15 00:39:46.818207] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.895 [2024-05-15 00:39:46.818220] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:24:20.895 [2024-05-15 00:39:46.818225] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:24:20.895 [2024-05-15 00:39:46.818230] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:24:20.895 [2024-05-15 00:39:46.818236] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.895 [2024-05-15 00:39:46.818242] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:24:20.895 [2024-05-15 00:39:46.818336] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.895 [2024-05-15 00:39:46.818342] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.895 [2024-05-15 00:39:46.818346] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.818350] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:24:20.895 [2024-05-15 00:39:46.818357] 
nvme_ctrlr.c:2903:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:20.895 [2024-05-15 00:39:46.818363] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:20.895 [2024-05-15 00:39:46.818372] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:20.895 [2024-05-15 00:39:46.818380] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:20.895 [2024-05-15 00:39:46.818390] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.818398] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.818403] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:24:20.895 [2024-05-15 00:39:46.818412] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:20.895 [2024-05-15 00:39:46.818422] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:24:20.895 [2024-05-15 00:39:46.818495] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.895 [2024-05-15 00:39:46.818501] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.895 [2024-05-15 00:39:46.818505] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.818510] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:24:20.895 [2024-05-15 00:39:46.818566] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:20.895 [2024-05-15 00:39:46.818578] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:20.895 [2024-05-15 00:39:46.818588] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.818594] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:24:20.895 [2024-05-15 00:39:46.818604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.895 [2024-05-15 00:39:46.818614] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:24:20.895 [2024-05-15 00:39:46.818701] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.895 [2024-05-15 00:39:46.818707] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.895 [2024-05-15 00:39:46.818711] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.818716] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=4096, cccid=4 00:24:20.895 [2024-05-15 00:39:46.818722] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x615000024980): expected_datao=0, payload_size=4096 00:24:20.895 [2024-05-15 00:39:46.818727] nvme_tcp.c: 767:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.818739] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.818743] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.864559] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.895 [2024-05-15 00:39:46.864571] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.895 [2024-05-15 00:39:46.864576] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.864581] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:24:20.895 [2024-05-15 00:39:46.864608] nvme_ctrlr.c:4558:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:20.895 [2024-05-15 00:39:46.864624] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:20.895 [2024-05-15 00:39:46.864634] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:20.895 [2024-05-15 00:39:46.864645] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.864650] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:24:20.895 [2024-05-15 00:39:46.864661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.895 [2024-05-15 00:39:46.864675] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:24:20.895 [2024-05-15 00:39:46.864781] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.895 [2024-05-15 00:39:46.864790] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.895 [2024-05-15 00:39:46.864794] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.864799] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=4096, cccid=4 00:24:20.895 [2024-05-15 00:39:46.864805] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x615000024980): expected_datao=0, payload_size=4096 00:24:20.895 [2024-05-15 00:39:46.864809] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.864821] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.895 [2024-05-15 00:39:46.864825] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.896 [2024-05-15 00:39:46.906837] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.896 [2024-05-15 00:39:46.906852] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.896 [2024-05-15 00:39:46.906857] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.896 [2024-05-15 00:39:46.906862] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:24:20.896 [2024-05-15 00:39:46.906882] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:20.896 [2024-05-15 00:39:46.906893] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:20.896 [2024-05-15 00:39:46.906904] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.896 [2024-05-15 00:39:46.906910] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:24:20.896 [2024-05-15 00:39:46.906920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.896 [2024-05-15 00:39:46.906935] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:24:20.896 [2024-05-15 00:39:46.907028] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.896 [2024-05-15 00:39:46.907035] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.896 [2024-05-15 00:39:46.907039] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.896 [2024-05-15 00:39:46.907043] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=4096, cccid=4 00:24:20.896 [2024-05-15 00:39:46.907049] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x615000024980): expected_datao=0, payload_size=4096 00:24:20.896 [2024-05-15 00:39:46.907054] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.896 [2024-05-15 00:39:46.907066] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.896 [2024-05-15 00:39:46.907070] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.896 [2024-05-15 00:39:46.952562] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.896 [2024-05-15 00:39:46.952577] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.896 [2024-05-15 00:39:46.952581] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.896 [2024-05-15 00:39:46.952586] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:24:20.896 [2024-05-15 00:39:46.952601] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:20.896 [2024-05-15 00:39:46.952610] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:20.896 [2024-05-15 00:39:46.952620] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:20.896 [2024-05-15 00:39:46.952628] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:20.896 [2024-05-15 00:39:46.952635] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:20.896 [2024-05-15 00:39:46.952641] nvme_ctrlr.c:2991:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:20.896 [2024-05-15 00:39:46.952647] nvme_ctrlr.c:1485:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:20.896 [2024-05-15 00:39:46.952654] nvme_ctrlr.c:1491:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:20.896 
[2024-05-15 00:39:46.952682] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.896 [2024-05-15 00:39:46.952688] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:24:20.896 [2024-05-15 00:39:46.952702] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.896 [2024-05-15 00:39:46.952716] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.896 [2024-05-15 00:39:46.952721] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.896 [2024-05-15 00:39:46.952726] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000024980) 00:24:20.896 [2024-05-15 00:39:46.952735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:20.896 [2024-05-15 00:39:46.952749] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:24:20.896 [2024-05-15 00:39:46.952755] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:24:20.896 [2024-05-15 00:39:46.952844] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.896 [2024-05-15 00:39:46.952852] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.896 [2024-05-15 00:39:46.952857] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.896 [2024-05-15 00:39:46.952863] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:24:20.896 [2024-05-15 00:39:46.952874] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.896 [2024-05-15 00:39:46.952882] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.896 [2024-05-15 00:39:46.952886] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.896 [2024-05-15 00:39:46.952891] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x615000024980 00:24:20.896 [2024-05-15 00:39:46.952899] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.896 [2024-05-15 00:39:46.952904] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000024980) 00:24:20.896 [2024-05-15 00:39:46.952912] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.896 [2024-05-15 00:39:46.952921] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:24:20.896 [2024-05-15 00:39:46.952993] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.896 [2024-05-15 00:39:46.953000] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.896 [2024-05-15 00:39:46.953004] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.896 [2024-05-15 00:39:46.953008] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x615000024980 00:24:20.896 [2024-05-15 00:39:46.953017] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.896 [2024-05-15 00:39:46.953022] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000024980) 00:24:20.896 [2024-05-15 00:39:46.953030] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.896 [2024-05-15 00:39:46.953039] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:24:20.896 [2024-05-15 00:39:46.953114] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.896 [2024-05-15 00:39:46.953121] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.896 [2024-05-15 00:39:46.953124] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.896 [2024-05-15 00:39:46.953129] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x615000024980 00:24:20.896 [2024-05-15 00:39:46.953137] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.896 [2024-05-15 00:39:46.953142] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000024980) 00:24:20.896 [2024-05-15 00:39:46.953150] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.896 [2024-05-15 00:39:46.953159] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:24:20.896 [2024-05-15 00:39:46.953224] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.896 [2024-05-15 00:39:46.953231] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.896 [2024-05-15 00:39:46.953235] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.896 [2024-05-15 00:39:46.953239] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x615000024980 00:24:20.896 [2024-05-15 00:39:46.953256] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.896 [2024-05-15 00:39:46.953261] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x615000024980) 00:24:20.896 [2024-05-15 00:39:46.953272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.896 [2024-05-15 00:39:46.953282] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.896 [2024-05-15 00:39:46.953287] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x615000024980) 00:24:20.896 [2024-05-15 00:39:46.953296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.896 [2024-05-15 00:39:46.953306] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.896 [2024-05-15 00:39:46.953311] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x615000024980) 00:24:20.896 [2024-05-15 00:39:46.953320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.896 [2024-05-15 00:39:46.953330] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.896 [2024-05-15 00:39:46.953335] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000024980) 00:24:20.896 [2024-05-15 00:39:46.953345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.896 [2024-05-15 00:39:46.953356] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:24:20.896 [2024-05-15 00:39:46.953364] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:24:20.896 [2024-05-15 00:39:46.953369] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b940, cid 6, qid 0 00:24:20.896 [2024-05-15 00:39:46.953379] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:24:20.896 [2024-05-15 00:39:46.953536] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.896 [2024-05-15 00:39:46.953543] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.896 [2024-05-15 00:39:46.953548] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.896 [2024-05-15 00:39:46.953557] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=8192, cccid=5 00:24:20.896 [2024-05-15 00:39:46.953563] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b7e0) on tqpair(0x615000024980): expected_datao=0, payload_size=8192 00:24:20.896 [2024-05-15 00:39:46.953569] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.896 [2024-05-15 00:39:46.953585] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.896 [2024-05-15 00:39:46.953590] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.896 [2024-05-15 00:39:46.953597] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.896 [2024-05-15 00:39:46.953604] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.896 [2024-05-15 00:39:46.953608] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.896 [2024-05-15 00:39:46.953612] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=512, cccid=4 00:24:20.896 [2024-05-15 00:39:46.953618] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x615000024980): expected_datao=0, payload_size=512 00:24:20.896 [2024-05-15 00:39:46.953622] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.897 [2024-05-15 00:39:46.953629] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.897 [2024-05-15 00:39:46.953633] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.897 [2024-05-15 00:39:46.953643] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.897 [2024-05-15 00:39:46.953649] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.897 [2024-05-15 00:39:46.953653] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.897 [2024-05-15 00:39:46.953657] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=512, cccid=6 00:24:20.897 [2024-05-15 00:39:46.953662] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b940) on tqpair(0x615000024980): expected_datao=0, payload_size=512 00:24:20.897 [2024-05-15 00:39:46.953667] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.897 [2024-05-15 00:39:46.953674] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.897 [2024-05-15 00:39:46.953677] 
nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.897 [2024-05-15 00:39:46.953685] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:20.897 [2024-05-15 00:39:46.953691] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:20.897 [2024-05-15 00:39:46.953695] nvme_tcp.c:1710:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:20.897 [2024-05-15 00:39:46.953700] nvme_tcp.c:1711:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x615000024980): datao=0, datal=4096, cccid=7 00:24:20.897 [2024-05-15 00:39:46.953704] nvme_tcp.c:1722:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001baa0) on tqpair(0x615000024980): expected_datao=0, payload_size=4096 00:24:20.897 [2024-05-15 00:39:46.953709] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.897 [2024-05-15 00:39:46.953716] nvme_tcp.c:1512:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:20.897 [2024-05-15 00:39:46.953720] nvme_tcp.c:1296:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:20.897 [2024-05-15 00:39:46.953728] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.897 [2024-05-15 00:39:46.953734] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.897 [2024-05-15 00:39:46.953738] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.897 [2024-05-15 00:39:46.953743] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x615000024980 00:24:20.897 [2024-05-15 00:39:46.953761] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.897 [2024-05-15 00:39:46.953767] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.897 [2024-05-15 00:39:46.953771] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.897 [2024-05-15 00:39:46.953775] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x615000024980 00:24:20.897 [2024-05-15 00:39:46.953786] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.897 [2024-05-15 00:39:46.953792] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.897 [2024-05-15 00:39:46.953796] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.897 [2024-05-15 00:39:46.953800] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b940) on tqpair=0x615000024980 00:24:20.897 [2024-05-15 00:39:46.953812] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.897 [2024-05-15 00:39:46.953818] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.897 [2024-05-15 00:39:46.953822] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.897 [2024-05-15 00:39:46.953826] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x615000024980 00:24:20.897 ===================================================== 00:24:20.897 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:20.897 ===================================================== 00:24:20.897 Controller Capabilities/Features 00:24:20.897 ================================ 00:24:20.897 Vendor ID: 8086 00:24:20.897 Subsystem Vendor ID: 8086 00:24:20.897 Serial Number: SPDK00000000000001 00:24:20.897 Model Number: SPDK bdev Controller 00:24:20.897 Firmware Version: 24.05 00:24:20.897 Recommended Arb Burst: 6 00:24:20.897 IEEE OUI Identifier: e4 d2 5c 00:24:20.897 
Multi-path I/O 00:24:20.897 May have multiple subsystem ports: Yes 00:24:20.897 May have multiple controllers: Yes 00:24:20.897 Associated with SR-IOV VF: No 00:24:20.897 Max Data Transfer Size: 131072 00:24:20.897 Max Number of Namespaces: 32 00:24:20.897 Max Number of I/O Queues: 127 00:24:20.897 NVMe Specification Version (VS): 1.3 00:24:20.897 NVMe Specification Version (Identify): 1.3 00:24:20.897 Maximum Queue Entries: 128 00:24:20.897 Contiguous Queues Required: Yes 00:24:20.897 Arbitration Mechanisms Supported 00:24:20.897 Weighted Round Robin: Not Supported 00:24:20.897 Vendor Specific: Not Supported 00:24:20.897 Reset Timeout: 15000 ms 00:24:20.897 Doorbell Stride: 4 bytes 00:24:20.897 NVM Subsystem Reset: Not Supported 00:24:20.897 Command Sets Supported 00:24:20.897 NVM Command Set: Supported 00:24:20.897 Boot Partition: Not Supported 00:24:20.897 Memory Page Size Minimum: 4096 bytes 00:24:20.897 Memory Page Size Maximum: 4096 bytes 00:24:20.897 Persistent Memory Region: Not Supported 00:24:20.897 Optional Asynchronous Events Supported 00:24:20.897 Namespace Attribute Notices: Supported 00:24:20.897 Firmware Activation Notices: Not Supported 00:24:20.897 ANA Change Notices: Not Supported 00:24:20.897 PLE Aggregate Log Change Notices: Not Supported 00:24:20.897 LBA Status Info Alert Notices: Not Supported 00:24:20.897 EGE Aggregate Log Change Notices: Not Supported 00:24:20.897 Normal NVM Subsystem Shutdown event: Not Supported 00:24:20.897 Zone Descriptor Change Notices: Not Supported 00:24:20.897 Discovery Log Change Notices: Not Supported 00:24:20.897 Controller Attributes 00:24:20.897 128-bit Host Identifier: Supported 00:24:20.897 Non-Operational Permissive Mode: Not Supported 00:24:20.897 NVM Sets: Not Supported 00:24:20.897 Read Recovery Levels: Not Supported 00:24:20.897 Endurance Groups: Not Supported 00:24:20.897 Predictable Latency Mode: Not Supported 00:24:20.897 Traffic Based Keep ALive: Not Supported 00:24:20.897 Namespace Granularity: Not Supported 00:24:20.897 SQ Associations: Not Supported 00:24:20.897 UUID List: Not Supported 00:24:20.897 Multi-Domain Subsystem: Not Supported 00:24:20.897 Fixed Capacity Management: Not Supported 00:24:20.897 Variable Capacity Management: Not Supported 00:24:20.897 Delete Endurance Group: Not Supported 00:24:20.897 Delete NVM Set: Not Supported 00:24:20.897 Extended LBA Formats Supported: Not Supported 00:24:20.897 Flexible Data Placement Supported: Not Supported 00:24:20.897 00:24:20.897 Controller Memory Buffer Support 00:24:20.897 ================================ 00:24:20.897 Supported: No 00:24:20.897 00:24:20.897 Persistent Memory Region Support 00:24:20.897 ================================ 00:24:20.897 Supported: No 00:24:20.897 00:24:20.897 Admin Command Set Attributes 00:24:20.897 ============================ 00:24:20.897 Security Send/Receive: Not Supported 00:24:20.897 Format NVM: Not Supported 00:24:20.897 Firmware Activate/Download: Not Supported 00:24:20.897 Namespace Management: Not Supported 00:24:20.897 Device Self-Test: Not Supported 00:24:20.897 Directives: Not Supported 00:24:20.897 NVMe-MI: Not Supported 00:24:20.897 Virtualization Management: Not Supported 00:24:20.897 Doorbell Buffer Config: Not Supported 00:24:20.897 Get LBA Status Capability: Not Supported 00:24:20.897 Command & Feature Lockdown Capability: Not Supported 00:24:20.897 Abort Command Limit: 4 00:24:20.897 Async Event Request Limit: 4 00:24:20.897 Number of Firmware Slots: N/A 00:24:20.897 Firmware Slot 1 Read-Only: N/A 00:24:20.897 Firmware 
Activation Without Reset: N/A 00:24:20.897 Multiple Update Detection Support: N/A 00:24:20.897 Firmware Update Granularity: No Information Provided 00:24:20.897 Per-Namespace SMART Log: No 00:24:20.897 Asymmetric Namespace Access Log Page: Not Supported 00:24:20.897 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:20.897 Command Effects Log Page: Supported 00:24:20.897 Get Log Page Extended Data: Supported 00:24:20.897 Telemetry Log Pages: Not Supported 00:24:20.897 Persistent Event Log Pages: Not Supported 00:24:20.897 Supported Log Pages Log Page: May Support 00:24:20.897 Commands Supported & Effects Log Page: Not Supported 00:24:20.897 Feature Identifiers & Effects Log Page:May Support 00:24:20.897 NVMe-MI Commands & Effects Log Page: May Support 00:24:20.897 Data Area 4 for Telemetry Log: Not Supported 00:24:20.897 Error Log Page Entries Supported: 128 00:24:20.897 Keep Alive: Supported 00:24:20.897 Keep Alive Granularity: 10000 ms 00:24:20.897 00:24:20.897 NVM Command Set Attributes 00:24:20.897 ========================== 00:24:20.897 Submission Queue Entry Size 00:24:20.897 Max: 64 00:24:20.897 Min: 64 00:24:20.897 Completion Queue Entry Size 00:24:20.897 Max: 16 00:24:20.897 Min: 16 00:24:20.897 Number of Namespaces: 32 00:24:20.897 Compare Command: Supported 00:24:20.897 Write Uncorrectable Command: Not Supported 00:24:20.897 Dataset Management Command: Supported 00:24:20.897 Write Zeroes Command: Supported 00:24:20.897 Set Features Save Field: Not Supported 00:24:20.897 Reservations: Supported 00:24:20.897 Timestamp: Not Supported 00:24:20.897 Copy: Supported 00:24:20.897 Volatile Write Cache: Present 00:24:20.897 Atomic Write Unit (Normal): 1 00:24:20.897 Atomic Write Unit (PFail): 1 00:24:20.897 Atomic Compare & Write Unit: 1 00:24:20.897 Fused Compare & Write: Supported 00:24:20.897 Scatter-Gather List 00:24:20.897 SGL Command Set: Supported 00:24:20.897 SGL Keyed: Supported 00:24:20.897 SGL Bit Bucket Descriptor: Not Supported 00:24:20.897 SGL Metadata Pointer: Not Supported 00:24:20.897 Oversized SGL: Not Supported 00:24:20.897 SGL Metadata Address: Not Supported 00:24:20.897 SGL Offset: Supported 00:24:20.897 Transport SGL Data Block: Not Supported 00:24:20.897 Replay Protected Memory Block: Not Supported 00:24:20.897 00:24:20.897 Firmware Slot Information 00:24:20.897 ========================= 00:24:20.897 Active slot: 1 00:24:20.897 Slot 1 Firmware Revision: 24.05 00:24:20.898 00:24:20.898 00:24:20.898 Commands Supported and Effects 00:24:20.898 ============================== 00:24:20.898 Admin Commands 00:24:20.898 -------------- 00:24:20.898 Get Log Page (02h): Supported 00:24:20.898 Identify (06h): Supported 00:24:20.898 Abort (08h): Supported 00:24:20.898 Set Features (09h): Supported 00:24:20.898 Get Features (0Ah): Supported 00:24:20.898 Asynchronous Event Request (0Ch): Supported 00:24:20.898 Keep Alive (18h): Supported 00:24:20.898 I/O Commands 00:24:20.898 ------------ 00:24:20.898 Flush (00h): Supported LBA-Change 00:24:20.898 Write (01h): Supported LBA-Change 00:24:20.898 Read (02h): Supported 00:24:20.898 Compare (05h): Supported 00:24:20.898 Write Zeroes (08h): Supported LBA-Change 00:24:20.898 Dataset Management (09h): Supported LBA-Change 00:24:20.898 Copy (19h): Supported LBA-Change 00:24:20.898 Unknown (79h): Supported LBA-Change 00:24:20.898 Unknown (7Ah): Supported 00:24:20.898 00:24:20.898 Error Log 00:24:20.898 ========= 00:24:20.898 00:24:20.898 Arbitration 00:24:20.898 =========== 00:24:20.898 Arbitration Burst: 1 00:24:20.898 00:24:20.898 Power 
Management 00:24:20.898 ================ 00:24:20.898 Number of Power States: 1 00:24:20.898 Current Power State: Power State #0 00:24:20.898 Power State #0: 00:24:20.898 Max Power: 0.00 W 00:24:20.898 Non-Operational State: Operational 00:24:20.898 Entry Latency: Not Reported 00:24:20.898 Exit Latency: Not Reported 00:24:20.898 Relative Read Throughput: 0 00:24:20.898 Relative Read Latency: 0 00:24:20.898 Relative Write Throughput: 0 00:24:20.898 Relative Write Latency: 0 00:24:20.898 Idle Power: Not Reported 00:24:20.898 Active Power: Not Reported 00:24:20.898 Non-Operational Permissive Mode: Not Supported 00:24:20.898 00:24:20.898 Health Information 00:24:20.898 ================== 00:24:20.898 Critical Warnings: 00:24:20.898 Available Spare Space: OK 00:24:20.898 Temperature: OK 00:24:20.898 Device Reliability: OK 00:24:20.898 Read Only: No 00:24:20.898 Volatile Memory Backup: OK 00:24:20.898 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:20.898 Temperature Threshold: [2024-05-15 00:39:46.953958] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.898 [2024-05-15 00:39:46.953967] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x615000024980) 00:24:20.898 [2024-05-15 00:39:46.953976] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.898 [2024-05-15 00:39:46.953988] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:24:20.898 [2024-05-15 00:39:46.954063] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.898 [2024-05-15 00:39:46.954071] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.898 [2024-05-15 00:39:46.954075] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.898 [2024-05-15 00:39:46.954081] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x615000024980 00:24:20.898 [2024-05-15 00:39:46.954119] nvme_ctrlr.c:4222:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:20.898 [2024-05-15 00:39:46.954132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.898 [2024-05-15 00:39:46.954140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.898 [2024-05-15 00:39:46.954147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.898 [2024-05-15 00:39:46.954155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:20.898 [2024-05-15 00:39:46.954164] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.898 [2024-05-15 00:39:46.954169] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.898 [2024-05-15 00:39:46.954176] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.898 [2024-05-15 00:39:46.954189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.898 [2024-05-15 00:39:46.954201] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.898 [2024-05-15 
00:39:46.954271] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.898 [2024-05-15 00:39:46.954278] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.898 [2024-05-15 00:39:46.954283] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.898 [2024-05-15 00:39:46.954288] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.898 [2024-05-15 00:39:46.954298] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.898 [2024-05-15 00:39:46.954303] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.898 [2024-05-15 00:39:46.954311] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.898 [2024-05-15 00:39:46.954321] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.898 [2024-05-15 00:39:46.954334] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.898 [2024-05-15 00:39:46.954419] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.898 [2024-05-15 00:39:46.954426] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.898 [2024-05-15 00:39:46.954430] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.898 [2024-05-15 00:39:46.954434] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.898 [2024-05-15 00:39:46.954440] nvme_ctrlr.c:1083:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:20.898 [2024-05-15 00:39:46.954447] nvme_ctrlr.c:1086:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:20.898 [2024-05-15 00:39:46.954460] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.898 [2024-05-15 00:39:46.954465] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.898 [2024-05-15 00:39:46.954470] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.898 [2024-05-15 00:39:46.954479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.898 [2024-05-15 00:39:46.954491] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.898 [2024-05-15 00:39:46.954562] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.898 [2024-05-15 00:39:46.954568] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.898 [2024-05-15 00:39:46.954573] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.898 [2024-05-15 00:39:46.954577] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.898 [2024-05-15 00:39:46.954587] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.898 [2024-05-15 00:39:46.954591] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.898 [2024-05-15 00:39:46.954596] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.898 [2024-05-15 00:39:46.954604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:20.898 [2024-05-15 00:39:46.954614] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.898 [2024-05-15 00:39:46.954680] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.898 [2024-05-15 00:39:46.954687] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.898 [2024-05-15 00:39:46.954691] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.898 [2024-05-15 00:39:46.954695] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.898 [2024-05-15 00:39:46.954705] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.898 [2024-05-15 00:39:46.954709] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.898 [2024-05-15 00:39:46.954714] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.898 [2024-05-15 00:39:46.954723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.898 [2024-05-15 00:39:46.954733] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.898 [2024-05-15 00:39:46.954809] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.898 [2024-05-15 00:39:46.954815] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.898 [2024-05-15 00:39:46.954819] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.898 [2024-05-15 00:39:46.954824] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.898 [2024-05-15 00:39:46.954833] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.954838] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.954842] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.899 [2024-05-15 00:39:46.954850] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.899 [2024-05-15 00:39:46.954859] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.899 [2024-05-15 00:39:46.954929] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.899 [2024-05-15 00:39:46.954935] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.899 [2024-05-15 00:39:46.954939] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.954943] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.899 [2024-05-15 00:39:46.954953] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.954957] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.954962] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.899 [2024-05-15 00:39:46.954970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.899 [2024-05-15 00:39:46.954980] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x62600001b520, cid 3, qid 0 00:24:20.899 [2024-05-15 00:39:46.955042] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.899 [2024-05-15 00:39:46.955049] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.899 [2024-05-15 00:39:46.955053] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.955057] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.899 [2024-05-15 00:39:46.955070] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.955074] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.955079] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.899 [2024-05-15 00:39:46.955087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.899 [2024-05-15 00:39:46.955097] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.899 [2024-05-15 00:39:46.955163] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.899 [2024-05-15 00:39:46.955169] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.899 [2024-05-15 00:39:46.955173] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.955178] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.899 [2024-05-15 00:39:46.955187] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.955191] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.955195] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.899 [2024-05-15 00:39:46.955204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.899 [2024-05-15 00:39:46.955213] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.899 [2024-05-15 00:39:46.955284] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.899 [2024-05-15 00:39:46.955291] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.899 [2024-05-15 00:39:46.955295] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.955299] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.899 [2024-05-15 00:39:46.955308] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.955313] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.955318] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.899 [2024-05-15 00:39:46.955329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.899 [2024-05-15 00:39:46.955339] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.899 [2024-05-15 00:39:46.955405] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:24:20.899 [2024-05-15 00:39:46.955411] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.899 [2024-05-15 00:39:46.955415] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.955420] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.899 [2024-05-15 00:39:46.955429] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.955434] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.955438] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.899 [2024-05-15 00:39:46.955446] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.899 [2024-05-15 00:39:46.955456] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.899 [2024-05-15 00:39:46.955528] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.899 [2024-05-15 00:39:46.955535] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.899 [2024-05-15 00:39:46.955539] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.955543] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.899 [2024-05-15 00:39:46.955556] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.955560] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.955564] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.899 [2024-05-15 00:39:46.955572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.899 [2024-05-15 00:39:46.955582] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.899 [2024-05-15 00:39:46.955642] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.899 [2024-05-15 00:39:46.955648] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.899 [2024-05-15 00:39:46.955652] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.955657] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.899 [2024-05-15 00:39:46.955666] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.955670] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.955675] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.899 [2024-05-15 00:39:46.955683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.899 [2024-05-15 00:39:46.955692] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.899 [2024-05-15 00:39:46.955753] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.899 [2024-05-15 00:39:46.955759] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.899 
[2024-05-15 00:39:46.955763] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.955767] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.899 [2024-05-15 00:39:46.955776] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.955781] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.955785] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.899 [2024-05-15 00:39:46.955793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.899 [2024-05-15 00:39:46.955802] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.899 [2024-05-15 00:39:46.955870] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.899 [2024-05-15 00:39:46.955877] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.899 [2024-05-15 00:39:46.955881] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.955885] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.899 [2024-05-15 00:39:46.955894] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.955899] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.955903] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.899 [2024-05-15 00:39:46.955910] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.899 [2024-05-15 00:39:46.955920] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.899 [2024-05-15 00:39:46.955996] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.899 [2024-05-15 00:39:46.956002] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.899 [2024-05-15 00:39:46.956006] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.956011] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.899 [2024-05-15 00:39:46.956020] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.956025] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.956029] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.899 [2024-05-15 00:39:46.956037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.899 [2024-05-15 00:39:46.956047] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.899 [2024-05-15 00:39:46.956110] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.899 [2024-05-15 00:39:46.956116] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.899 [2024-05-15 00:39:46.956120] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.899 [2024-05-15 
00:39:46.956125] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.899 [2024-05-15 00:39:46.956134] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.956139] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.956143] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.899 [2024-05-15 00:39:46.956151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.899 [2024-05-15 00:39:46.956160] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.899 [2024-05-15 00:39:46.956220] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.899 [2024-05-15 00:39:46.956226] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.899 [2024-05-15 00:39:46.956230] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.956234] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.899 [2024-05-15 00:39:46.956244] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.899 [2024-05-15 00:39:46.956248] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.900 [2024-05-15 00:39:46.956253] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.900 [2024-05-15 00:39:46.956262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.900 [2024-05-15 00:39:46.956271] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.900 [2024-05-15 00:39:46.956334] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.900 [2024-05-15 00:39:46.956340] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.900 [2024-05-15 00:39:46.956344] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.900 [2024-05-15 00:39:46.956348] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.900 [2024-05-15 00:39:46.956358] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.900 [2024-05-15 00:39:46.956362] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.900 [2024-05-15 00:39:46.956367] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.900 [2024-05-15 00:39:46.956374] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.900 [2024-05-15 00:39:46.956384] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.900 [2024-05-15 00:39:46.956453] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.900 [2024-05-15 00:39:46.956459] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.900 [2024-05-15 00:39:46.956463] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.900 [2024-05-15 00:39:46.956467] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 
00:24:20.900 [2024-05-15 00:39:46.956477] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.900 [2024-05-15 00:39:46.956481] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.900 [2024-05-15 00:39:46.956486] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.900 [2024-05-15 00:39:46.956493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.900 [2024-05-15 00:39:46.956503] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.900 [2024-05-15 00:39:46.960559] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.900 [2024-05-15 00:39:46.960568] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.900 [2024-05-15 00:39:46.960572] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.900 [2024-05-15 00:39:46.960577] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.900 [2024-05-15 00:39:46.960586] nvme_tcp.c: 767:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:20.900 [2024-05-15 00:39:46.960591] nvme_tcp.c: 950:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:20.900 [2024-05-15 00:39:46.960596] nvme_tcp.c: 959:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x615000024980) 00:24:20.900 [2024-05-15 00:39:46.960604] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.900 [2024-05-15 00:39:46.960615] nvme_tcp.c: 924:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:24:20.900 [2024-05-15 00:39:46.960686] nvme_tcp.c:1164:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:20.900 [2024-05-15 00:39:46.960692] nvme_tcp.c:1966:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:20.900 [2024-05-15 00:39:46.960696] nvme_tcp.c:1639:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:20.900 [2024-05-15 00:39:46.960701] nvme_tcp.c: 909:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x615000024980 00:24:20.900 [2024-05-15 00:39:46.960709] nvme_ctrlr.c:1205:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:24:20.900 0 Kelvin (-273 Celsius) 00:24:20.900 Available Spare: 0% 00:24:20.900 Available Spare Threshold: 0% 00:24:20.900 Life Percentage Used: 0% 00:24:20.900 Data Units Read: 0 00:24:20.900 Data Units Written: 0 00:24:20.900 Host Read Commands: 0 00:24:20.900 Host Write Commands: 0 00:24:20.900 Controller Busy Time: 0 minutes 00:24:20.900 Power Cycles: 0 00:24:20.900 Power On Hours: 0 hours 00:24:20.900 Unsafe Shutdowns: 0 00:24:20.900 Unrecoverable Media Errors: 0 00:24:20.900 Lifetime Error Log Entries: 0 00:24:20.900 Warning Temperature Time: 0 minutes 00:24:20.900 Critical Temperature Time: 0 minutes 00:24:20.900 00:24:20.900 Number of Queues 00:24:20.900 ================ 00:24:20.900 Number of I/O Submission Queues: 127 00:24:20.900 Number of I/O Completion Queues: 127 00:24:20.900 00:24:20.900 Active Namespaces 00:24:20.900 ================= 00:24:20.900 Namespace ID:1 00:24:20.900 Error Recovery Timeout: Unlimited 00:24:20.900 Command Set Identifier: NVM (00h) 00:24:20.900 Deallocate: Supported 00:24:20.900 Deallocated/Unwritten Error: Not Supported 00:24:20.900 Deallocated Read 
Value: Unknown 00:24:20.900 Deallocate in Write Zeroes: Not Supported 00:24:20.900 Deallocated Guard Field: 0xFFFF 00:24:20.900 Flush: Supported 00:24:20.900 Reservation: Supported 00:24:20.900 Namespace Sharing Capabilities: Multiple Controllers 00:24:20.900 Size (in LBAs): 131072 (0GiB) 00:24:20.900 Capacity (in LBAs): 131072 (0GiB) 00:24:20.900 Utilization (in LBAs): 131072 (0GiB) 00:24:20.900 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:20.900 EUI64: ABCDEF0123456789 00:24:20.900 UUID: 2e143035-ca2c-4110-9d3e-687456871066 00:24:20.900 Thin Provisioning: Not Supported 00:24:20.900 Per-NS Atomic Units: Yes 00:24:20.900 Atomic Boundary Size (Normal): 0 00:24:20.900 Atomic Boundary Size (PFail): 0 00:24:20.900 Atomic Boundary Offset: 0 00:24:20.900 Maximum Single Source Range Length: 65535 00:24:20.900 Maximum Copy Length: 65535 00:24:20.900 Maximum Source Range Count: 1 00:24:20.900 NGUID/EUI64 Never Reused: No 00:24:20.900 Namespace Write Protected: No 00:24:20.900 Number of LBA Formats: 1 00:24:20.900 Current LBA Format: LBA Format #00 00:24:20.900 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:20.900 00:24:20.900 00:39:46 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:20.900 00:39:46 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:20.900 00:39:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@560 -- # xtrace_disable 00:24:20.900 00:39:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:20.900 00:39:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:24:20.900 00:39:47 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:20.900 00:39:47 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:20.900 00:39:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:20.900 00:39:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:20.900 00:39:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:20.900 00:39:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:20.900 00:39:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:20.900 00:39:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:20.900 rmmod nvme_tcp 00:24:20.900 rmmod nvme_fabrics 00:24:21.159 rmmod nvme_keyring 00:24:21.159 00:39:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:21.159 00:39:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:24:21.159 00:39:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:21.159 00:39:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2093467 ']' 00:24:21.159 00:39:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2093467 00:24:21.159 00:39:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@947 -- # '[' -z 2093467 ']' 00:24:21.159 00:39:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@951 -- # kill -0 2093467 00:24:21.159 00:39:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # uname 00:24:21.159 00:39:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:21.159 00:39:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2093467 00:24:21.159 00:39:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:24:21.159 00:39:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@957 
-- # '[' reactor_0 = sudo ']' 00:24:21.159 00:39:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2093467' 00:24:21.159 killing process with pid 2093467 00:24:21.159 00:39:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # kill 2093467 00:24:21.159 [2024-05-15 00:39:47.133904] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:21.159 00:39:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@971 -- # wait 2093467 00:24:21.728 00:39:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:21.728 00:39:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:21.728 00:39:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:21.728 00:39:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:21.728 00:39:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:21.728 00:39:47 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.728 00:39:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:21.728 00:39:47 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.634 00:39:49 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:23.634 00:24:23.634 real 0m9.414s 00:24:23.634 user 0m8.317s 00:24:23.634 sys 0m4.297s 00:24:23.634 00:39:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # xtrace_disable 00:24:23.634 00:39:49 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:23.634 ************************************ 00:24:23.634 END TEST nvmf_identify 00:24:23.634 ************************************ 00:24:23.893 00:39:49 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:23.893 00:39:49 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:24:23.893 00:39:49 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:24:23.893 00:39:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:23.893 ************************************ 00:24:23.893 START TEST nvmf_perf 00:24:23.893 ************************************ 00:24:23.893 00:39:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:24:23.893 * Looking for test storage... 
00:24:23.893 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:24:23.893 00:39:49 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:24:23.893 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:23.893 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:23.893 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:23.893 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:23.893 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:23.893 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:23.893 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:23.893 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:23.893 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:23.893 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:23.893 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:23.893 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.894 00:39:49 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:23.894 00:39:49 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # 
set +x 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:24:30.463 Found 0000:27:00.0 (0x8086 - 0x159b) 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:24:30.463 Found 0000:27:00.1 (0x8086 - 0x159b) 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:24:30.463 Found net devices under 0000:27:00.0: cvl_0_0 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:24:30.463 Found net devices under 0000:27:00.1: cvl_0_1 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:30.463 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:30.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:30.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:24:30.464 00:24:30.464 --- 10.0.0.2 ping statistics --- 00:24:30.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.464 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:30.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:30.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:24:30.464 00:24:30.464 --- 10.0.0.1 ping statistics --- 00:24:30.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.464 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@721 -- # xtrace_disable 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:30.464 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2098034 00:24:30.723 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2098034 00:24:30.723 00:39:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@828 -- # '[' -z 2098034 ']' 00:24:30.723 00:39:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.723 00:39:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:30.723 00:39:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.723 00:39:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:30.723 00:39:56 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:30.723 00:39:56 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:30.723 [2024-05-15 00:39:56.717156] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:24:30.723 [2024-05-15 00:39:56.717286] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.723 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.723 [2024-05-15 00:39:56.856861] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:30.982 [2024-05-15 00:39:56.953785] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:30.982 [2024-05-15 00:39:56.953833] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:30.982 [2024-05-15 00:39:56.953843] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:30.982 [2024-05-15 00:39:56.953853] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:30.982 [2024-05-15 00:39:56.953860] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:30.982 [2024-05-15 00:39:56.954061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.982 [2024-05-15 00:39:56.954134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:30.982 [2024-05-15 00:39:56.954236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.982 [2024-05-15 00:39:56.954245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:31.617 00:39:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:24:31.617 00:39:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@861 -- # return 0 00:24:31.617 00:39:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:31.617 00:39:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@727 -- # xtrace_disable 00:24:31.617 00:39:57 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:31.617 00:39:57 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.617 00:39:57 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:24:31.617 00:39:57 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:38.186 00:40:03 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:24:38.186 00:40:03 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:24:38.186 00:40:03 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:c9:00.0 00:24:38.186 00:40:03 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:38.186 00:40:03 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:24:38.186 00:40:03 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:c9:00.0 ']' 00:24:38.186 00:40:03 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:24:38.186 00:40:03 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:24:38.186 00:40:03 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:38.186 [2024-05-15 00:40:03.748679] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:38.187 00:40:03 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:38.187 00:40:03 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:38.187 00:40:03 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:38.187 00:40:04 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:24:38.187 00:40:04 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Nvme0n1 00:24:38.187 00:40:04 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:38.187 [2024-05-15 00:40:04.331094] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:24:38.187 [2024-05-15 00:40:04.331469] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:38.444 00:40:04 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:38.444 00:40:04 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:c9:00.0 ']' 00:24:38.444 00:40:04 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:c9:00.0' 00:24:38.444 00:40:04 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:24:38.444 00:40:04 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:c9:00.0' 00:24:39.816 Initializing NVMe Controllers 00:24:39.816 Attached to NVMe Controller at 0000:c9:00.0 [8086:0a54] 00:24:39.816 Associating PCIE (0000:c9:00.0) NSID 1 with lcore 0 00:24:39.816 Initialization complete. Launching workers. 00:24:39.816 ======================================================== 00:24:39.816 Latency(us) 00:24:39.816 Device Information : IOPS MiB/s Average min max 00:24:39.816 PCIE (0000:c9:00.0) NSID 1 from core 0: 94396.45 368.74 338.55 25.89 5239.11 00:24:39.816 ======================================================== 00:24:39.816 Total : 94396.45 368.74 338.55 25.89 5239.11 00:24:39.816 00:24:39.816 00:40:05 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:40.074 EAL: No free 2048 kB hugepages reported on node 1 00:24:41.448 Initializing NVMe Controllers 00:24:41.448 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:41.448 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:41.448 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:41.448 Initialization complete. Launching workers. 
00:24:41.448 ======================================================== 00:24:41.448 Latency(us) 00:24:41.448 Device Information : IOPS MiB/s Average min max 00:24:41.448 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 98.00 0.38 10366.68 94.09 45842.30 00:24:41.448 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 17998.49 7953.27 48006.25 00:24:41.448 ======================================================== 00:24:41.448 Total : 154.00 0.60 13141.88 94.09 48006.25 00:24:41.448 00:24:41.448 00:40:07 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:41.448 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.348 Initializing NVMe Controllers 00:24:43.348 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:43.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:43.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:43.348 Initialization complete. Launching workers. 00:24:43.348 ======================================================== 00:24:43.348 Latency(us) 00:24:43.348 Device Information : IOPS MiB/s Average min max 00:24:43.348 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11636.16 45.45 2750.34 385.98 8995.78 00:24:43.348 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3780.43 14.77 8595.75 7088.47 47773.55 00:24:43.348 ======================================================== 00:24:43.348 Total : 15416.59 60.22 4183.74 385.98 47773.55 00:24:43.348 00:24:43.348 00:40:09 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:24:43.348 00:40:09 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:43.348 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.883 Initializing NVMe Controllers 00:24:45.883 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:45.883 Controller IO queue size 128, less than required. 00:24:45.883 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:45.883 Controller IO queue size 128, less than required. 00:24:45.883 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:45.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:45.883 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:45.883 Initialization complete. Launching workers. 
00:24:45.883 ======================================================== 00:24:45.883 Latency(us) 00:24:45.883 Device Information : IOPS MiB/s Average min max 00:24:45.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2257.39 564.35 57831.53 26590.12 128721.71 00:24:45.883 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 571.35 142.84 236857.33 102505.66 359574.27 00:24:45.883 ======================================================== 00:24:45.883 Total : 2828.74 707.18 93991.01 26590.12 359574.27 00:24:45.883 00:24:45.883 00:40:11 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:45.883 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.883 No valid NVMe controllers or AIO or URING devices found 00:24:45.883 Initializing NVMe Controllers 00:24:45.883 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:45.883 Controller IO queue size 128, less than required. 00:24:45.883 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:45.883 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:45.883 Controller IO queue size 128, less than required. 00:24:45.883 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:45.883 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:24:45.883 WARNING: Some requested NVMe devices were skipped 00:24:46.142 00:40:12 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:46.142 EAL: No free 2048 kB hugepages reported on node 1 00:24:48.676 Initializing NVMe Controllers 00:24:48.676 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:48.676 Controller IO queue size 128, less than required. 00:24:48.676 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:48.676 Controller IO queue size 128, less than required. 00:24:48.676 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:48.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:48.676 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:48.676 Initialization complete. Launching workers. 
00:24:48.676 00:24:48.676 ==================== 00:24:48.676 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:48.676 TCP transport: 00:24:48.676 polls: 15909 00:24:48.676 idle_polls: 9426 00:24:48.676 sock_completions: 6483 00:24:48.676 nvme_completions: 7849 00:24:48.676 submitted_requests: 11754 00:24:48.676 queued_requests: 1 00:24:48.676 00:24:48.676 ==================== 00:24:48.676 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:48.676 TCP transport: 00:24:48.676 polls: 19993 00:24:48.676 idle_polls: 11685 00:24:48.676 sock_completions: 8308 00:24:48.676 nvme_completions: 8429 00:24:48.676 submitted_requests: 12604 00:24:48.676 queued_requests: 1 00:24:48.676 ======================================================== 00:24:48.676 Latency(us) 00:24:48.676 Device Information : IOPS MiB/s Average min max 00:24:48.676 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1961.99 490.50 67608.41 32059.08 182441.26 00:24:48.676 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2106.99 526.75 60750.85 38629.91 120729.24 00:24:48.676 ======================================================== 00:24:48.676 Total : 4068.97 1017.24 64057.45 32059.08 182441.26 00:24:48.676 00:24:48.934 00:40:14 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:48.934 00:40:14 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:48.934 00:40:15 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:48.934 00:40:15 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:48.934 00:40:15 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:48.934 00:40:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:48.934 00:40:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:48.934 00:40:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:48.934 00:40:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:48.934 00:40:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:48.934 00:40:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:48.934 rmmod nvme_tcp 00:24:48.934 rmmod nvme_fabrics 00:24:49.194 rmmod nvme_keyring 00:24:49.194 00:40:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:49.194 00:40:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:49.194 00:40:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:49.194 00:40:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2098034 ']' 00:24:49.194 00:40:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2098034 00:24:49.194 00:40:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@947 -- # '[' -z 2098034 ']' 00:24:49.194 00:40:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@951 -- # kill -0 2098034 00:24:49.194 00:40:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # uname 00:24:49.194 00:40:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:24:49.194 00:40:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2098034 00:24:49.194 00:40:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:24:49.194 00:40:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:24:49.194 00:40:15 nvmf_tcp.nvmf_perf -- 
common/autotest_common.sh@965 -- # echo 'killing process with pid 2098034' 00:24:49.194 killing process with pid 2098034 00:24:49.194 00:40:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # kill 2098034 00:24:49.194 [2024-05-15 00:40:15.184560] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:24:49.194 00:40:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@971 -- # wait 2098034 00:24:52.477 00:40:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:52.477 00:40:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:52.477 00:40:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:52.477 00:40:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:52.477 00:40:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:52.477 00:40:18 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.477 00:40:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:52.477 00:40:18 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.379 00:40:20 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:54.379 00:24:54.379 real 0m30.484s 00:24:54.379 user 1m24.490s 00:24:54.379 sys 0m8.000s 00:24:54.379 00:40:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # xtrace_disable 00:24:54.379 00:40:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:54.379 ************************************ 00:24:54.379 END TEST nvmf_perf 00:24:54.379 ************************************ 00:24:54.379 00:40:20 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:54.379 00:40:20 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:24:54.379 00:40:20 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:24:54.379 00:40:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:54.379 ************************************ 00:24:54.379 START TEST nvmf_fio_host 00:24:54.379 ************************************ 00:24:54.379 00:40:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:54.379 * Looking for test storage... 
00:24:54.379 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:24:54.379 00:40:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:24:54.379 00:40:20 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:54.379 00:40:20 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.379 00:40:20 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:54.379 00:40:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.379 00:40:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.379 00:40:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.379 00:40:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:54.379 00:40:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.379 00:40:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:24:54.379 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:54.379 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:54.380 
00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # nvmftestinit 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:54.380 00:40:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
net_dev 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:24:59.651 Found 0000:27:00.0 (0x8086 - 0x159b) 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:24:59.651 Found 0000:27:00.1 (0x8086 - 0x159b) 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:24:59.651 Found net devices under 0000:27:00.0: cvl_0_0 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:24:59.651 Found net devices under 0000:27:00.1: cvl_0_1 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 
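The discovery loop above maps each supported NIC PCI function to its kernel network interface by globbing sysfs, which is how the log arrives at cvl_0_0 under 0000:27:00.0 and cvl_0_1 under 0000:27:00.1. A standalone equivalent of that lookup, using the PCI addresses from this log purely for illustration:

# Same idea as pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) in nvmf/common.sh.
for pci in 0000:27:00.0 0000:27:00.1; do
    echo "net devices under $pci: $(ls /sys/bus/pci/devices/$pci/net/)"
done
# On this machine the output matches the "Found net devices under ..." lines above.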
00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:59.651 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:59.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:59.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:24:59.910 00:24:59.910 --- 10.0.0.2 ping statistics --- 00:24:59.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.910 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:59.910 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:59.910 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:24:59.910 00:24:59.910 --- 10.0.0.1 ping statistics --- 00:24:59.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:59.910 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # [[ y != y ]] 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@721 -- # xtrace_disable 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@22 -- # nvmfpid=2106018 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # waitforlisten 2106018 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@828 -- # '[' -z 2106018 ']' 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:59.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:24:59.910 00:40:25 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.910 [2024-05-15 00:40:26.030781] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:24:59.910 [2024-05-15 00:40:26.030880] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:00.168 EAL: No free 2048 kB hugepages reported on node 1 00:25:00.168 [2024-05-15 00:40:26.152951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:00.168 [2024-05-15 00:40:26.252065] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
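nvmf_tcp_init above builds a two-endpoint topology on a single host: cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), reachability is verified with the pings above, and nvmf_tgt is then launched inside the namespace. A condensed recap of the commands already shown in this log (interface names and addresses unchanged, binary path shortened):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF   # later listens on 10.0.0.2:4420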
00:25:00.168 [2024-05-15 00:40:26.252101] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:00.168 [2024-05-15 00:40:26.252110] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:00.168 [2024-05-15 00:40:26.252119] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:00.168 [2024-05-15 00:40:26.252126] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:00.168 [2024-05-15 00:40:26.252317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:00.168 [2024-05-15 00:40:26.252395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:00.168 [2024-05-15 00:40:26.252495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:00.168 [2024-05-15 00:40:26.252504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@861 -- # return 0 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.734 [2024-05-15 00:40:26.752511] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@727 -- # xtrace_disable 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.734 Malloc1 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 
-- # set +x 00:25:00.734 [2024-05-15 00:40:26.854144] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:00.734 [2024-05-15 00:40:26.854418] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # break 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:00.734 00:40:26 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:01.309 test: (g=0): rw=randrw, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:01.309 fio-3.35 00:25:01.309 Starting 1 thread 00:25:01.309 EAL: No free 2048 kB hugepages reported on node 1 00:25:03.832 00:25:03.832 test: (groupid=0, jobs=1): err= 0: pid=2106477: Wed May 15 00:40:29 2024 00:25:03.832 read: IOPS=12.2k, BW=47.8MiB/s (50.1MB/s)(95.8MiB/2005msec) 00:25:03.832 slat (nsec): min=1567, max=143583, avg=2529.37, stdev=1540.05 00:25:03.832 clat (usec): min=1922, max=9858, avg=5740.23, stdev=458.07 00:25:03.832 lat (usec): min=1942, max=9860, avg=5742.76, stdev=457.96 00:25:03.832 clat percentiles (usec): 00:25:03.832 | 1.00th=[ 4686], 5.00th=[ 5080], 10.00th=[ 5211], 20.00th=[ 5407], 00:25:03.832 | 30.00th=[ 5538], 40.00th=[ 5669], 50.00th=[ 5735], 60.00th=[ 5800], 00:25:03.832 | 70.00th=[ 5932], 80.00th=[ 6063], 90.00th=[ 6259], 95.00th=[ 6456], 00:25:03.832 | 99.00th=[ 7111], 99.50th=[ 7504], 99.90th=[ 8455], 99.95th=[ 9110], 00:25:03.832 | 99.99th=[ 9765] 00:25:03.832 bw ( KiB/s): min=47712, max=49864, per=99.97%, avg=48928.00, stdev=903.28, samples=4 00:25:03.832 iops : min=11928, max=12468, avg=12232.00, stdev=226.53, samples=4 00:25:03.832 write: IOPS=12.2k, BW=47.6MiB/s (50.0MB/s)(95.5MiB/2005msec); 0 zone resets 00:25:03.832 slat (nsec): min=1610, max=104273, avg=2646.28, stdev=942.46 00:25:03.832 clat (usec): min=1255, max=8914, avg=4684.91, stdev=381.70 00:25:03.832 lat (usec): min=1268, max=8916, avg=4687.56, stdev=381.63 00:25:03.832 clat percentiles (usec): 00:25:03.832 | 1.00th=[ 3851], 5.00th=[ 4146], 10.00th=[ 4293], 20.00th=[ 4424], 00:25:03.832 | 30.00th=[ 4490], 40.00th=[ 4621], 50.00th=[ 4686], 60.00th=[ 4752], 00:25:03.832 | 70.00th=[ 4817], 80.00th=[ 4948], 90.00th=[ 5080], 95.00th=[ 5211], 00:25:03.832 | 99.00th=[ 5800], 99.50th=[ 6128], 99.90th=[ 7177], 99.95th=[ 7701], 00:25:03.832 | 99.99th=[ 8848] 00:25:03.832 bw ( KiB/s): min=48384, max=49400, per=100.00%, avg=48788.00, stdev=493.26, samples=4 00:25:03.832 iops : min=12096, max=12350, avg=12197.00, stdev=123.32, samples=4 00:25:03.832 lat (msec) : 2=0.05%, 4=1.17%, 10=98.78% 00:25:03.832 cpu : usr=84.23%, sys=15.37%, ctx=5, majf=0, minf=1531 00:25:03.832 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:03.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:03.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:03.832 issued rwts: total=24533,24452,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:03.832 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:03.832 00:25:03.832 Run status group 0 (all jobs): 00:25:03.832 READ: bw=47.8MiB/s (50.1MB/s), 47.8MiB/s-47.8MiB/s (50.1MB/s-50.1MB/s), io=95.8MiB (100MB), run=2005-2005msec 00:25:03.832 WRITE: bw=47.6MiB/s (50.0MB/s), 47.6MiB/s-47.6MiB/s (50.0MB/s-50.0MB/s), io=95.5MiB (100MB), run=2005-2005msec 00:25:04.089 ----------------------------------------------------- 00:25:04.089 Suppressions used: 00:25:04.089 count bytes template 00:25:04.089 1 57 /usr/src/fio/parse.c 00:25:04.089 1 8 libtcmalloc_minimal.so 00:25:04.089 ----------------------------------------------------- 00:25:04.089 00:25:04.089 00:40:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:04.089 00:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1357 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:04.089 00:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:25:04.089 00:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:04.089 00:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1336 -- # local sanitizers 00:25:04.089 00:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:25:04.089 00:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1338 -- # shift 00:25:04.089 00:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local asan_lib= 00:25:04.089 00:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:25:04.089 00:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:25:04.089 00:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # grep libasan 00:25:04.089 00:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:25:04.089 00:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1342 -- # asan_lib=/usr/lib64/libasan.so.8 00:25:04.089 00:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:25:04.089 00:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # break 00:25:04.089 00:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:04.089 00:40:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:04.347 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:04.347 fio-3.35 00:25:04.347 Starting 1 thread 00:25:04.660 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.191 00:25:07.191 test: (groupid=0, jobs=1): err= 0: pid=2107214: Wed May 15 00:40:32 2024 00:25:07.191 read: IOPS=9306, BW=145MiB/s (152MB/s)(292MiB/2006msec) 00:25:07.191 slat (usec): min=2, max=142, avg= 3.66, stdev= 1.84 00:25:07.191 clat (usec): min=2120, max=18710, avg=8314.06, stdev=2920.13 00:25:07.191 lat (usec): min=2123, max=18715, avg=8317.73, stdev=2920.87 00:25:07.191 clat percentiles (usec): 00:25:07.191 | 1.00th=[ 3621], 5.00th=[ 4359], 10.00th=[ 4883], 20.00th=[ 5669], 00:25:07.191 | 30.00th=[ 6390], 40.00th=[ 7242], 50.00th=[ 7832], 60.00th=[ 8586], 00:25:07.191 | 70.00th=[ 9503], 80.00th=[10814], 90.00th=[12780], 95.00th=[14091], 00:25:07.191 | 99.00th=[15664], 99.50th=[16188], 99.90th=[17957], 99.95th=[18482], 00:25:07.191 | 99.99th=[18744] 00:25:07.191 bw ( KiB/s): min=48032, max=91040, per=49.40%, avg=73552.00, stdev=20463.42, samples=4 00:25:07.191 iops : min= 3002, max= 5690, avg=4597.00, stdev=1278.96, samples=4 00:25:07.191 write: IOPS=5691, BW=88.9MiB/s (93.2MB/s)(150MiB/1685msec); 0 zone resets 00:25:07.191 slat (usec): min=28, max=203, avg=38.74, stdev=11.47 00:25:07.191 clat (usec): min=2154, max=19479, avg=9734.02, stdev=2521.25 00:25:07.191 lat (usec): min=2189, max=19531, 
avg=9772.75, stdev=2530.02 00:25:07.191 clat percentiles (usec): 00:25:07.191 | 1.00th=[ 5800], 5.00th=[ 6587], 10.00th=[ 6980], 20.00th=[ 7439], 00:25:07.191 | 30.00th=[ 7898], 40.00th=[ 8455], 50.00th=[ 9241], 60.00th=[10028], 00:25:07.191 | 70.00th=[10945], 80.00th=[12125], 90.00th=[13304], 95.00th=[14353], 00:25:07.191 | 99.00th=[15926], 99.50th=[16450], 99.90th=[18220], 99.95th=[18220], 00:25:07.191 | 99.99th=[19530] 00:25:07.191 bw ( KiB/s): min=48864, max=95520, per=84.25%, avg=76720.00, stdev=21910.92, samples=4 00:25:07.191 iops : min= 3054, max= 5970, avg=4795.00, stdev=1369.43, samples=4 00:25:07.191 lat (msec) : 4=1.71%, 10=68.22%, 20=30.08% 00:25:07.191 cpu : usr=85.39%, sys=14.11%, ctx=8, majf=0, minf=2292 00:25:07.191 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:07.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:07.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:07.191 issued rwts: total=18669,9590,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:07.191 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:07.191 00:25:07.191 Run status group 0 (all jobs): 00:25:07.191 READ: bw=145MiB/s (152MB/s), 145MiB/s-145MiB/s (152MB/s-152MB/s), io=292MiB (306MB), run=2006-2006msec 00:25:07.191 WRITE: bw=88.9MiB/s (93.2MB/s), 88.9MiB/s-88.9MiB/s (93.2MB/s-93.2MB/s), io=150MiB (157MB), run=1685-1685msec 00:25:07.191 ----------------------------------------------------- 00:25:07.191 Suppressions used: 00:25:07.191 count bytes template 00:25:07.191 1 57 /usr/src/fio/parse.c 00:25:07.191 115 11040 /usr/src/fio/iolog.c 00:25:07.191 1 8 libtcmalloc_minimal.so 00:25:07.191 ----------------------------------------------------- 00:25:07.191 00:25:07.191 00:40:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:07.191 00:40:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:07.191 00:40:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.191 00:40:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:07.191 00:40:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:25:07.191 00:40:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:25:07.191 00:40:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:25:07.191 00:40:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@84 -- # nvmftestfini 00:25:07.191 00:40:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:07.191 00:40:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:25:07.191 00:40:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:07.191 00:40:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:25:07.191 00:40:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:07.191 00:40:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:07.191 rmmod nvme_tcp 00:25:07.191 rmmod nvme_fabrics 00:25:07.191 rmmod nvme_keyring 00:25:07.191 00:40:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:07.191 00:40:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:25:07.192 00:40:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:25:07.192 00:40:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2106018 ']' 00:25:07.192 00:40:33 nvmf_tcp.nvmf_fio_host -- 
nvmf/common.sh@490 -- # killprocess 2106018 00:25:07.192 00:40:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@947 -- # '[' -z 2106018 ']' 00:25:07.192 00:40:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@951 -- # kill -0 2106018 00:25:07.192 00:40:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # uname 00:25:07.192 00:40:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:07.192 00:40:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2106018 00:25:07.192 00:40:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:25:07.192 00:40:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:25:07.192 00:40:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2106018' 00:25:07.192 killing process with pid 2106018 00:25:07.192 00:40:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # kill 2106018 00:25:07.192 [2024-05-15 00:40:33.266506] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:07.192 00:40:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@971 -- # wait 2106018 00:25:07.755 00:40:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:07.755 00:40:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:07.755 00:40:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:07.755 00:40:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:07.755 00:40:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:07.755 00:40:33 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.755 00:40:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:07.756 00:40:33 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.281 00:40:35 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:10.281 00:25:10.281 real 0m15.476s 00:25:10.281 user 1m2.217s 00:25:10.281 sys 0m5.942s 00:25:10.281 00:40:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # xtrace_disable 00:25:10.281 00:40:35 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.281 ************************************ 00:25:10.281 END TEST nvmf_fio_host 00:25:10.281 ************************************ 00:25:10.281 00:40:35 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:10.281 00:40:35 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:25:10.281 00:40:35 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:25:10.281 00:40:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:10.281 ************************************ 00:25:10.281 START TEST nvmf_failover 00:25:10.281 ************************************ 00:25:10.281 00:40:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:10.281 * Looking for test storage... 
00:25:10.281 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:25:10.281 00:40:35 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:25:10.281 00:40:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:10.281 00:40:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:10.281 00:40:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:10.281 00:40:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:10.281 00:40:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:10.281 00:40:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:10.281 00:40:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:10.281 00:40:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:10.281 00:40:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:10.281 00:40:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:10.281 00:40:35 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:10.281 00:40:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:25:10.281 00:40:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:25:10.281 00:40:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:10.281 00:40:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:10.281 00:40:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:25:10.281 00:40:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:10.281 00:40:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:25:10.281 00:40:36 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:10.281 00:40:36 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:10.281 00:40:36 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:10.281 00:40:36 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 
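failover.sh declares its fixtures above much as fio.sh did: a 64 MB, 512-byte-block malloc bdev presumably backs the test subsystem, rpc.py is the control-plane tool, and a second RPC socket (/var/tmp/bdevperf.sock) is set aside for the bdevperf application this test drives. The commands below are an illustrative sketch of how those values are typically used; only the sizes, the bdev name and the socket path come from this log.

# Backing bdev, as created earlier in this log for the fio host test:
# bdev_malloc_create <size_MB> <block_size> -b <name>
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
# bdevperf listens on its own RPC socket so it can be driven independently of nvmf_tgt;
# rpc.py's -s flag selects that socket (sketch only, not a command taken from this log):
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs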
00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:25:10.282 00:40:36 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:25:15.541 Found 0000:27:00.0 (0x8086 - 0x159b) 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:25:15.541 Found 0000:27:00.1 (0x8086 - 0x159b) 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:25:15.541 Found net devices under 0000:27:00.0: cvl_0_0 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 
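The discovery loop above maps each detected PCI function to its kernel interface name by globbing /sys/bus/pci/devices/<bdf>/net/, which is how the cvl_0_0 and cvl_0_1 names are produced. A minimal stand-alone sketch of that same lookup (the default BDF below is just the first device found in this run) would be:

    #!/usr/bin/env bash
    # Print the kernel net device(s) backing a PCI network function, using the
    # same /sys/bus/pci/devices/<bdf>/net/ glob that nvmf/common.sh applies above.
    bdf=${1:-0000:27:00.0}              # example BDF taken from this log
    for netdir in /sys/bus/pci/devices/"$bdf"/net/*; do
        [[ -e $netdir ]] || continue    # glob did not match: no net device bound
        printf 'Found net devices under %s: %s\n' "$bdf" "${netdir##*/}"
    done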
00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:25:15.541 Found net devices under 0000:27:00.1: cvl_0_1 00:25:15.541 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:15.542 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:15.542 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.673 ms 00:25:15.542 00:25:15.542 --- 10.0.0.2 ping statistics --- 00:25:15.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.542 rtt min/avg/max/mdev = 0.673/0.673/0.673/0.000 ms 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:15.542 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:15.542 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.283 ms 00:25:15.542 00:25:15.542 --- 10.0.0.1 ping statistics --- 00:25:15.542 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.542 rtt min/avg/max/mdev = 0.283/0.283/0.283/0.000 ms 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@721 -- # xtrace_disable 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2111681 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2111681 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 2111681 ']' 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:15.542 00:40:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:15.542 [2024-05-15 00:40:41.349260] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:25:15.542 [2024-05-15 00:40:41.349372] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.542 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.542 [2024-05-15 00:40:41.501259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:15.542 [2024-05-15 00:40:41.678036] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:15.542 [2024-05-15 00:40:41.678107] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:15.542 [2024-05-15 00:40:41.678127] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:15.542 [2024-05-15 00:40:41.678145] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:15.542 [2024-05-15 00:40:41.678159] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:15.542 [2024-05-15 00:40:41.678384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:15.542 [2024-05-15 00:40:41.678513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.542 [2024-05-15 00:40:41.678530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:16.106 00:40:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:16.106 00:40:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:25:16.106 00:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:16.106 00:40:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@727 -- # xtrace_disable 00:25:16.106 00:40:42 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:16.106 00:40:42 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:16.106 00:40:42 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:16.106 [2024-05-15 00:40:42.212276] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:16.106 00:40:42 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:16.364 Malloc0 00:25:16.364 00:40:42 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:16.623 00:40:42 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:16.623 00:40:42 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:16.883 [2024-05-15 00:40:42.843432] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:16.883 [2024-05-15 00:40:42.843871] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:16.883 00:40:42 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- 
# /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:16.883 [2024-05-15 00:40:42.995834] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:16.883 00:40:43 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:17.140 [2024-05-15 00:40:43.152026] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:17.140 00:40:43 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2112013 00:25:17.140 00:40:43 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:17.140 00:40:43 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:17.140 00:40:43 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2112013 /var/tmp/bdevperf.sock 00:25:17.140 00:40:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 2112013 ']' 00:25:17.140 00:40:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:17.140 00:40:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:17.140 00:40:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:17.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
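Once bdevperf (started above with -z -r /var/tmp/bdevperf.sock) attaches its NVMe0 paths on ports 4420 and 4421, the failover exercise that follows in the log boils down to removing the live listener, letting I/O shift to a surviving path, and later restoring it. Condensed from the rpc.py calls visible below, with rpc and nqn as shorthand for the full path and subsystem NQN used in this run:

    #!/usr/bin/env bash
    # Condensed sketch of the host/failover.sh listener steps that follow in this log.
    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Drop the active 4420 listener; bdevperf fails over to the 4421 path.
    $rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    sleep 3
    # Attach a third path on 4422, then retire 4421 so I/O moves to 4422.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$nqn"
    $rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
    sleep 3
    # Restore 4420 and finally remove 4422 so I/O fails back to the original port.
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    $rpc nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4422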
00:25:17.140 00:40:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:17.140 00:40:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:18.072 00:40:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:18.072 00:40:43 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:25:18.072 00:40:43 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:18.328 NVMe0n1 00:25:18.328 00:40:44 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:18.587 00:25:18.587 00:40:44 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2112319 00:25:18.587 00:40:44 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:18.587 00:40:44 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:19.521 00:40:45 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:19.779 [2024-05-15 00:40:45.769921] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:19.779 [2024-05-15 00:40:45.769984] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:19.779 [2024-05-15 00:40:45.769993] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:19.779 [2024-05-15 00:40:45.770000] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:19.779 [2024-05-15 00:40:45.770008] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:19.779 [2024-05-15 00:40:45.770015] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:19.779 [2024-05-15 00:40:45.770022] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:19.779 [2024-05-15 00:40:45.770029] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:19.779 [2024-05-15 00:40:45.770036] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:19.779 [2024-05-15 00:40:45.770043] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:19.779 [2024-05-15 00:40:45.770050] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:19.779 [2024-05-15 00:40:45.770057] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 
00:25:19.779 [2024-05-15 00:40:45.770218] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:19.779 [2024-05-15 00:40:45.770225] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:19.779 [2024-05-15 00:40:45.770232] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:19.779 [2024-05-15 00:40:45.770239] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:25:19.779 00:40:45 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:23.061 00:40:48 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:23.061 00:25:23.061 00:40:49 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:23.061 [2024-05-15 00:40:49.208146] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:23.061 [2024-05-15 00:40:49.208205] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:23.061 [2024-05-15 00:40:49.208215] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:23.061 [2024-05-15 00:40:49.208223] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:23.061 [2024-05-15 00:40:49.208230] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:23.061 [2024-05-15 00:40:49.208238] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:23.061 [2024-05-15 00:40:49.208246] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:23.061 [2024-05-15 00:40:49.208253] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:23.061 [2024-05-15 00:40:49.208261] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:23.061 [2024-05-15 00:40:49.208275] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:23.061 [2024-05-15 00:40:49.208282] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:23.061 [2024-05-15 00:40:49.208289] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:23.061 [2024-05-15 00:40:49.208296] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:23.061 [2024-05-15 00:40:49.208303] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x618000003080 is same with the state(5) to be set 00:25:23.062 [2024-05-15 00:40:49.208454] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of
tqpair=0x618000003080 is same with the state(5) to be set 00:25:23.062 [2024-05-15 00:40:49.208461] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:23.062 [2024-05-15 00:40:49.208469] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:23.062 [2024-05-15 00:40:49.208477] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:23.062 [2024-05-15 00:40:49.208484] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:23.062 [2024-05-15 00:40:49.208491] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:23.062 [2024-05-15 00:40:49.208498] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:23.062 [2024-05-15 00:40:49.208506] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:23.062 [2024-05-15 00:40:49.208515] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:23.062 [2024-05-15 00:40:49.208522] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:23.062 [2024-05-15 00:40:49.208529] tcp.c:1598:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:25:23.320 00:40:49 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:25:26.603 00:40:52 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:26.603 [2024-05-15 00:40:52.360681] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:26.603 00:40:52 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:25:27.537 00:40:53 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:27.537 00:40:53 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2112319 00:25:34.097 0 00:25:34.097 00:40:59 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2112013 00:25:34.097 00:40:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 2112013 ']' 00:25:34.097 00:40:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 2112013 00:25:34.097 00:40:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:25:34.097 00:40:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:34.097 00:40:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2112013 00:25:34.097 00:40:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:25:34.097 00:40:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:25:34.097 00:40:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2112013' 00:25:34.097 killing process with 
pid 2112013 00:25:34.097 00:40:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 2112013 00:25:34.097 00:40:59 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 2112013 00:25:34.097 00:41:00 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:34.097 [2024-05-15 00:40:43.255654] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:25:34.097 [2024-05-15 00:40:43.255787] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2112013 ] 00:25:34.097 EAL: No free 2048 kB hugepages reported on node 1 00:25:34.097 [2024-05-15 00:40:43.373186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:34.097 [2024-05-15 00:40:43.469117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:34.097 Running I/O for 15 seconds... 00:25:34.097 [2024-05-15 00:40:45.771133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:95256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 00:40:45.771218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:95264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 00:40:45.771239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:95272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 00:40:45.771258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:95280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 00:40:45.771276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:95288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 00:40:45.771293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 00:40:45.771311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:95304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 
00:40:45.771329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:95312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 00:40:45.771346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:95320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 00:40:45.771363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:95328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 00:40:45.771382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:95336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 00:40:45.771406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:95344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 00:40:45.771423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 00:40:45.771442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:95360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 00:40:45.771459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:95368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 00:40:45.771477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:95376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 00:40:45.771495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:95384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 00:40:45.771512] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 00:40:45.771530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:95400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 00:40:45.771547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:95408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 00:40:45.771568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 00:40:45.771592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:95424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 00:40:45.771609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:95432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 00:40:45.771626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 00:40:45.771645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:95448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 00:40:45.771662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:95456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 00:40:45.771679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:95464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 00:40:45.771697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:68 nsid:1 lba:95472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 00:40:45.771714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:95480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.097 [2024-05-15 00:40:45.771731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:95488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.097 [2024-05-15 00:40:45.771739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.098 [2024-05-15 00:40:45.771749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:95496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.098 [2024-05-15 00:40:45.771757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.098 [2024-05-15 00:40:45.771766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:95504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.098 [2024-05-15 00:40:45.771773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.098 [2024-05-15 00:40:45.771782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.098 [2024-05-15 00:40:45.771791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.098 [2024-05-15 00:40:45.771800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:95520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.098 [2024-05-15 00:40:45.771808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.098 [2024-05-15 00:40:45.771818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:95528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.098 [2024-05-15 00:40:45.771825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.098 [2024-05-15 00:40:45.771835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:95536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.098 [2024-05-15 00:40:45.771842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.098 [2024-05-15 00:40:45.771854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:95544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.098 [2024-05-15 00:40:45.771862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.098 [2024-05-15 00:40:45.771871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:95552 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.098 [2024-05-15 00:40:45.771878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs: one queued READ (lba:95560) and queued WRITEs (lba:95584 through lba:95960, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) on sqid:1, each completed as ABORTED - SQ DELETION (00/08) ...]
[... repeated nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs *ERROR* "aborting queued i/o" / 558:nvme_qpair_manual_complete_request *NOTICE* "Command completed manually" groups: queued WRITEs lba:95968 through lba:96272 and READs lba:95568, lba:95576 (PRP1 0x0 PRP2 0x0) on sqid:1, each completed as ABORTED - SQ DELETION (00/08) ...]
00:25:34.101 [2024-05-15 00:40:45.778449] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150003a1680 was disconnected and freed. reset controller.
00:25:34.101 [2024-05-15 00:40:45.778485] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:25:34.101 [2024-05-15 00:40:45.778540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:34.101 [2024-05-15 00:40:45.778567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:34.101 [2024-05-15 00:40:45.778588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:34.101 [2024-05-15 00:40:45.778605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:34.101 [2024-05-15 00:40:45.778623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:34.101 [2024-05-15 00:40:45.778635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:34.101 [2024-05-15 00:40:45.778650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:34.101 [2024-05-15 00:40:45.778664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:34.101 [2024-05-15 00:40:45.778678] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:34.101 [2024-05-15 00:40:45.778757] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a0c80 (9): Bad file descriptor
00:25:34.101 [2024-05-15 00:40:45.782287] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:34.101 [2024-05-15 00:40:45.853294] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
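[Editor's note] The sequence above (queued I/O drained with ABORTED - SQ DELETION status, failover from 10.0.0.2:4420 to 10.0.0.2:4421, then "Resetting controller successful") is the expected behavior when the host-side bdev_nvme controller has a second TCP trid registered as a failover target. Below is a minimal sketch of how such a two-listener setup is typically wired up with SPDK's scripts/rpc.py; the bdev name, subsystem serial, addresses and ports are illustrative assumptions, not values taken from this run's test scripts.
# Hypothetical reproduction sketch (names/ports are assumptions): expose one malloc
# namespace through two TCP listeners so bdev_nvme can fail over between them.
scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# On the initiator side, attach the controller once per trid under the same bdev name;
# the second call registers port 4421 as the failover trid seen in the log above.
scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -f ipv4 -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1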
00:25:34.101 [2024-05-15 00:40:49.209160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:34.101 [2024-05-15 00:40:49.209218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion *NOTICE* pairs: queued READs lba:38056 through lba:38416 (SGL TRANSPORT DATA BLOCK) and queued WRITEs lba:38440 through lba:38696 (SGL DATA BLOCK OFFSET 0x0 len:0x1000) on sqid:1, each completed as ABORTED - SQ DELETION (00/08) ...]
00:25:34.103 [2024-05-15 00:40:49.210812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:38704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:34.103 [2024-05-15 00:40:49.210821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:34.103 [2024-05-15 00:40:49.210831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:38712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.103 [2024-05-15 00:40:49.210839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.103 [2024-05-15 00:40:49.210849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:38720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.103 [2024-05-15 00:40:49.210858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.103 [2024-05-15 00:40:49.210868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:38728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.103 [2024-05-15 00:40:49.210877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.103 [2024-05-15 00:40:49.210886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:38736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.103 [2024-05-15 00:40:49.210895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.103 [2024-05-15 00:40:49.210906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:38744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.103 [2024-05-15 00:40:49.210914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.103 [2024-05-15 00:40:49.210924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:38752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.103 [2024-05-15 00:40:49.210931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.103 [2024-05-15 00:40:49.210942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:38760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.103 [2024-05-15 00:40:49.210950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.103 [2024-05-15 00:40:49.210961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.103 [2024-05-15 00:40:49.210968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.103 [2024-05-15 00:40:49.210980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:38776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.103 [2024-05-15 00:40:49.210989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.103 [2024-05-15 00:40:49.210999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:38784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.103 [2024-05-15 00:40:49.211011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.104 [2024-05-15 00:40:49.211022] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:38792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.104 [2024-05-15 00:40:49.211030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.104 [2024-05-15 00:40:49.211040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:38800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.104 [2024-05-15 00:40:49.211049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.104 [2024-05-15 00:40:49.211059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:38808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.104 [2024-05-15 00:40:49.211066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.104 [2024-05-15 00:40:49.211078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.104 [2024-05-15 00:40:49.211086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.104 [2024-05-15 00:40:49.211121] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.104 [2024-05-15 00:40:49.211133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38824 len:8 PRP1 0x0 PRP2 0x0 00:25:34.104 [2024-05-15 00:40:49.211143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.104 [2024-05-15 00:40:49.211156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.104 [2024-05-15 00:40:49.211166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.104 [2024-05-15 00:40:49.211175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38832 len:8 PRP1 0x0 PRP2 0x0 00:25:34.104 [2024-05-15 00:40:49.211217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.104 [2024-05-15 00:40:49.211227] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.104 [2024-05-15 00:40:49.211234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.104 [2024-05-15 00:40:49.211242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38840 len:8 PRP1 0x0 PRP2 0x0 00:25:34.104 [2024-05-15 00:40:49.211252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.104 [2024-05-15 00:40:49.211260] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.104 [2024-05-15 00:40:49.211267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.104 [2024-05-15 00:40:49.211275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38848 len:8 PRP1 0x0 PRP2 0x0 00:25:34.104 [2024-05-15 00:40:49.211283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:34.104 [2024-05-15 00:40:49.211293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.104 [2024-05-15 00:40:49.211300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.104 [2024-05-15 00:40:49.211308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38856 len:8 PRP1 0x0 PRP2 0x0 00:25:34.104 [2024-05-15 00:40:49.211318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.104 [2024-05-15 00:40:49.211326] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.104 [2024-05-15 00:40:49.211335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.104 [2024-05-15 00:40:49.211343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38864 len:8 PRP1 0x0 PRP2 0x0 00:25:34.104 [2024-05-15 00:40:49.211353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.104 [2024-05-15 00:40:49.211362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.104 [2024-05-15 00:40:49.211369] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.104 [2024-05-15 00:40:49.211377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38872 len:8 PRP1 0x0 PRP2 0x0 00:25:34.104 [2024-05-15 00:40:49.211385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.104 [2024-05-15 00:40:49.211394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.104 [2024-05-15 00:40:49.211401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.104 [2024-05-15 00:40:49.211408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38880 len:8 PRP1 0x0 PRP2 0x0 00:25:34.104 [2024-05-15 00:40:49.211418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.104 [2024-05-15 00:40:49.211426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.104 [2024-05-15 00:40:49.211432] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.104 [2024-05-15 00:40:49.211440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38888 len:8 PRP1 0x0 PRP2 0x0 00:25:34.104 [2024-05-15 00:40:49.211448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.104 [2024-05-15 00:40:49.211456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.104 [2024-05-15 00:40:49.211463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.104 [2024-05-15 00:40:49.211471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38896 len:8 PRP1 0x0 PRP2 0x0 00:25:34.104 [2024-05-15 00:40:49.211480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.104 [2024-05-15 
00:40:49.211488] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.104 [2024-05-15 00:40:49.211495] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.104 [2024-05-15 00:40:49.211503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38904 len:8 PRP1 0x0 PRP2 0x0 00:25:34.104 [2024-05-15 00:40:49.211512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.104 [2024-05-15 00:40:49.211520] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.104 [2024-05-15 00:40:49.211527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.104 [2024-05-15 00:40:49.211535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38912 len:8 PRP1 0x0 PRP2 0x0 00:25:34.104 [2024-05-15 00:40:49.211544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.104 [2024-05-15 00:40:49.211556] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.104 [2024-05-15 00:40:49.211562] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.104 [2024-05-15 00:40:49.211569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38920 len:8 PRP1 0x0 PRP2 0x0 00:25:34.104 [2024-05-15 00:40:49.211576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.104 [2024-05-15 00:40:49.211586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.104 [2024-05-15 00:40:49.211592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.104 [2024-05-15 00:40:49.211600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38928 len:8 PRP1 0x0 PRP2 0x0 00:25:34.104 [2024-05-15 00:40:49.211609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.104 [2024-05-15 00:40:49.211617] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.104 [2024-05-15 00:40:49.211624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.104 [2024-05-15 00:40:49.211636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38936 len:8 PRP1 0x0 PRP2 0x0 00:25:34.104 [2024-05-15 00:40:49.211645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.104 [2024-05-15 00:40:49.211653] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.104 [2024-05-15 00:40:49.211660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.104 [2024-05-15 00:40:49.211667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38944 len:8 PRP1 0x0 PRP2 0x0 00:25:34.104 [2024-05-15 00:40:49.211676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.104 [2024-05-15 00:40:49.211685] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.104 [2024-05-15 00:40:49.211691] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.104 [2024-05-15 00:40:49.211699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38952 len:8 PRP1 0x0 PRP2 0x0 00:25:34.104 [2024-05-15 00:40:49.211707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.104 [2024-05-15 00:40:49.211721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.104 [2024-05-15 00:40:49.211727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.104 [2024-05-15 00:40:49.211735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38960 len:8 PRP1 0x0 PRP2 0x0 00:25:34.104 [2024-05-15 00:40:49.211744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.104 [2024-05-15 00:40:49.211753] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.104 [2024-05-15 00:40:49.211761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.104 [2024-05-15 00:40:49.211769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38968 len:8 PRP1 0x0 PRP2 0x0 00:25:34.104 [2024-05-15 00:40:49.211777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.104 [2024-05-15 00:40:49.211786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.104 [2024-05-15 00:40:49.211793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.104 [2024-05-15 00:40:49.211800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38976 len:8 PRP1 0x0 PRP2 0x0 00:25:34.104 [2024-05-15 00:40:49.211809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.104 [2024-05-15 00:40:49.211817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.104 [2024-05-15 00:40:49.211825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.104 [2024-05-15 00:40:49.211833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38984 len:8 PRP1 0x0 PRP2 0x0 00:25:34.104 [2024-05-15 00:40:49.211843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.104 [2024-05-15 00:40:49.211852] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.105 [2024-05-15 00:40:49.211858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.105 [2024-05-15 00:40:49.211865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38992 len:8 PRP1 0x0 PRP2 0x0 00:25:34.105 [2024-05-15 00:40:49.211875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:49.211883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:25:34.105 [2024-05-15 00:40:49.211891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.105 [2024-05-15 00:40:49.211899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39000 len:8 PRP1 0x0 PRP2 0x0 00:25:34.105 [2024-05-15 00:40:49.211908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:49.211916] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.105 [2024-05-15 00:40:49.211923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.105 [2024-05-15 00:40:49.211930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39008 len:8 PRP1 0x0 PRP2 0x0 00:25:34.105 [2024-05-15 00:40:49.211939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:49.211946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.105 [2024-05-15 00:40:49.211954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.105 [2024-05-15 00:40:49.211962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39016 len:8 PRP1 0x0 PRP2 0x0 00:25:34.105 [2024-05-15 00:40:49.211970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:49.211979] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.105 [2024-05-15 00:40:49.211985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.105 [2024-05-15 00:40:49.211992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39024 len:8 PRP1 0x0 PRP2 0x0 00:25:34.105 [2024-05-15 00:40:49.212002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:49.212009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.105 [2024-05-15 00:40:49.212015] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.105 [2024-05-15 00:40:49.212022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39032 len:8 PRP1 0x0 PRP2 0x0 00:25:34.105 [2024-05-15 00:40:49.212031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:49.212038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.105 [2024-05-15 00:40:49.212045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.105 [2024-05-15 00:40:49.212052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39040 len:8 PRP1 0x0 PRP2 0x0 00:25:34.105 [2024-05-15 00:40:49.212060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:49.212068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.105 [2024-05-15 00:40:49.212076] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.105 [2024-05-15 00:40:49.212084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39048 len:8 PRP1 0x0 PRP2 0x0 00:25:34.105 [2024-05-15 00:40:49.212093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:49.212100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.105 [2024-05-15 00:40:49.212107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.105 [2024-05-15 00:40:49.212115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39056 len:8 PRP1 0x0 PRP2 0x0 00:25:34.105 [2024-05-15 00:40:49.212124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:49.212133] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.105 [2024-05-15 00:40:49.212138] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.105 [2024-05-15 00:40:49.212147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39064 len:8 PRP1 0x0 PRP2 0x0 00:25:34.105 [2024-05-15 00:40:49.212157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:49.212164] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.105 [2024-05-15 00:40:49.212171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.105 [2024-05-15 00:40:49.212178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38424 len:8 PRP1 0x0 PRP2 0x0 00:25:34.105 [2024-05-15 00:40:49.212186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:49.212195] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.105 [2024-05-15 00:40:49.212201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.105 [2024-05-15 00:40:49.212208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38432 len:8 PRP1 0x0 PRP2 0x0 00:25:34.105 [2024-05-15 00:40:49.212218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:49.212344] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150003a1900 was disconnected and freed. reset controller. 
00:25:34.105 [2024-05-15 00:40:49.212364] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:34.105 [2024-05-15 00:40:49.212412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.105 [2024-05-15 00:40:49.212432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:49.212444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.105 [2024-05-15 00:40:49.212457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:49.212467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.105 [2024-05-15 00:40:49.212478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:49.212489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.105 [2024-05-15 00:40:49.212500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:49.212514] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.105 [2024-05-15 00:40:49.212574] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a0c80 (9): Bad file descriptor 00:25:34.105 [2024-05-15 00:40:49.215215] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.105 [2024-05-15 00:40:49.244183] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:25:34.105 [2024-05-15 00:40:53.526221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.105 [2024-05-15 00:40:53.526294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:53.526328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.105 [2024-05-15 00:40:53.526340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:53.526354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.105 [2024-05-15 00:40:53.526365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:53.526377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.105 [2024-05-15 00:40:53.526387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:53.526397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.105 [2024-05-15 00:40:53.526405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:53.526414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.105 [2024-05-15 00:40:53.526423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:53.526433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:63464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.105 [2024-05-15 00:40:53.526441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:53.526451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:63472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.105 [2024-05-15 00:40:53.526459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:53.526470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.105 [2024-05-15 00:40:53.526478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:53.526487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:63488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.105 [2024-05-15 00:40:53.526495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:53.526504] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:63496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.105 [2024-05-15 00:40:53.526512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:53.526527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:63504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.105 [2024-05-15 00:40:53.526535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:53.526548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.105 [2024-05-15 00:40:53.526565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.105 [2024-05-15 00:40:53.526577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.526586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.526596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:63528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.526603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.526613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:63536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.526622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.526632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:63544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.526641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.526651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:63552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.526662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.526674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.526689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.526700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.526710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.526723] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.526733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.526748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:34.106 [2024-05-15 00:40:53.526760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.526772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.526781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.526791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.526805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.526818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:63600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.526829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.526840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:63608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.526850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.526860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.526870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.526880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:63624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.526889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.526899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:63632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.526907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.526917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.526924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.526935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:36 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.526943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.526954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.526963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.526973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.526981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.526991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.527000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.527010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.527019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.527029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.527037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.527047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.527060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.527071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.527079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.527089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.527097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.527107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.527116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.527126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:63728 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.527133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.527144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.527153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.527163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.527171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.527181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.527188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.527198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.106 [2024-05-15 00:40:53.527205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.106 [2024-05-15 00:40:53.527215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.107 [2024-05-15 00:40:53.527222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.107 [2024-05-15 00:40:53.527233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:63776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.107 [2024-05-15 00:40:53.527241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.107 [2024-05-15 00:40:53.527251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:63784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.107 [2024-05-15 00:40:53.527259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.107 [2024-05-15 00:40:53.527269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:63792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.107 [2024-05-15 00:40:53.527276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.107 [2024-05-15 00:40:53.527288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:34.107 [2024-05-15 00:40:53.527297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.107 [2024-05-15 00:40:53.527307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:34.107 [2024-05-15 00:40:53.527315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.107 [... identical nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs repeated for the remaining queued READ and WRITE commands (READ lba 63816 through 64400, WRITE lba 64472 and 64480), every one completed as ABORTED - SQ DELETION (00/08) qid:1 ...] 00:25:34.109 [2024-05-15 00:40:53.528902] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a1e00 is same with the state(5) to be set 00:25:34.109 [2024-05-15 00:40:53.528917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:34.109 [2024-05-15 00:40:53.528926] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:34.109 [2024-05-15 00:40:53.528936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64408 len:8 PRP1 0x0 PRP2 0x0 00:25:34.109 [2024-05-15 00:40:53.528946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.109 [2024-05-15 00:40:53.529072] bdev_nvme.c:1602:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x6150003a1e00 was disconnected and freed. reset controller.
00:25:34.109 [2024-05-15 00:40:53.529089] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:34.109 [2024-05-15 00:40:53.529124] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.109 [2024-05-15 00:40:53.529136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.109 [2024-05-15 00:40:53.529146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.109 [2024-05-15 00:40:53.529154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.109 [2024-05-15 00:40:53.529163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.109 [2024-05-15 00:40:53.529172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.109 [2024-05-15 00:40:53.529181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:34.109 [2024-05-15 00:40:53.529190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:34.109 [2024-05-15 00:40:53.529200] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:34.109 [2024-05-15 00:40:53.531800] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:34.109 [2024-05-15 00:40:53.531835] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a0c80 (9): Bad file descriptor 00:25:34.109 [2024-05-15 00:40:53.602098] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
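The reset above is one of the failovers this run is expected to produce: a qpair is torn down, bdev_nvme fails over from 10.0.0.2:4422 back to 10.0.0.2:4420 and resets the controller. After each forced path drop the script re-checks that the NVMe0 controller is still registered with bdevperf before moving on; a minimal sketch of that check, assuming bdevperf is still serving RPCs on /var/tmp/bdevperf.sock and the SPDK tree is at the workspace path used in this run:

    #!/usr/bin/env bash
    # Sketch only: re-check that the NVMe0 controller survived a dropped path.
    rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk   # assumed checkout location
    rpc=$rootdir/scripts/rpc.py
    bdevperf_sock=/var/tmp/bdevperf.sock

    # bdev_nvme_get_controllers lists the controllers bdevperf currently holds;
    # grep -q makes the step fail (non-zero) if NVMe0 disappeared after the reset.
    $rpc -s $bdevperf_sock bdev_nvme_get_controllers | grep -q NVMe0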
00:25:34.109 00:25:34.109 Latency(us) 00:25:34.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:34.109 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:34.109 Verification LBA range: start 0x0 length 0x4000 00:25:34.109 NVMe0n1 : 15.01 11479.79 44.84 555.77 0.00 10614.18 396.67 15590.67 00:25:34.109 =================================================================================================================== 00:25:34.109 Total : 11479.79 44.84 555.77 0.00 10614.18 396.67 15590.67 00:25:34.109 Received shutdown signal, test time was about 15.000000 seconds 00:25:34.109 00:25:34.109 Latency(us) 00:25:34.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:34.109 =================================================================================================================== 00:25:34.109 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:34.109 00:41:00 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:34.109 00:41:00 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:25:34.109 00:41:00 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:25:34.109 00:41:00 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2115331 00:25:34.109 00:41:00 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2115331 /var/tmp/bdevperf.sock 00:25:34.109 00:41:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@828 -- # '[' -z 2115331 ']' 00:25:34.109 00:41:00 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:34.109 00:41:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:34.109 00:41:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:34.109 00:41:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:34.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
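The check traced above (grep -c 'Resetting controller successful', count=3, (( count != 3 ))) is the pass/fail gate for the first scenario: the 15-second run must have recovered from exactly three forced path failures. A fresh bdevperf is then started in -z (wait-for-RPC) mode for the next scenario. A condensed sketch of that step, assuming the previous run's output was captured in test/nvmf/host/try.txt as elsewhere in this script:

    # Sketch only: count the successful resets, then relaunch bdevperf for the next case.
    rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk   # assumed checkout location
    log=$rootdir/test/nvmf/host/try.txt                    # assumed capture file for the previous run

    count=$(grep -c 'Resetting controller successful' "$log")
    (( count == 3 )) || { echo "expected 3 successful resets, saw $count"; exit 1; }

    # -z keeps bdevperf idle until perform_tests is sent over its RPC socket;
    # -f makes it exit on error instead of hanging.
    $rootdir/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 1 -f &
    bdevperf_pid=$!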
00:25:34.109 00:41:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:34.109 00:41:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:35.046 00:41:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:35.046 00:41:00 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@861 -- # return 0 00:25:35.046 00:41:00 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:35.046 [2024-05-15 00:41:01.103003] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:35.046 00:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:35.304 [2024-05-15 00:41:01.258978] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:35.304 00:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:35.561 NVMe0n1 00:25:35.561 00:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:35.818 00:25:35.818 00:41:01 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:36.075 00:25:36.075 00:41:02 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:36.075 00:41:02 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:25:36.334 00:41:02 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:36.594 00:41:02 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:25:39.957 00:41:05 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:39.957 00:41:05 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:25:39.957 00:41:05 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2116526 00:25:39.957 00:41:05 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2116526 00:25:39.957 00:41:05 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:40.897 0 00:25:40.897 00:41:06 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:40.897 [2024-05-15 00:41:00.245447] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
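Laid out in order, the RPC sequence traced above prepares the second failover scenario: the target is told to listen on two more ports, bdevperf attaches the same subsystem over all three ports under one controller name, the currently active path is detached to force a failover, and I/O is then driven through bdevperf's own RPC socket. A minimal consolidated sketch of those calls, using the same addresses and NQN as the trace (the surrounding retry and error handling is omitted):

    # Sketch only: topology and trigger for the second failover case.
    rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk   # assumed checkout location
    rpc=$rootdir/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1

    # Expose two extra target ports for the same subsystem (default target socket).
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4421
    $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4422

    # Attach all three paths under a single controller name inside bdevperf.
    for port in 4420 4421 4422; do
        $rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
            -a 10.0.0.2 -s $port -f ipv4 -n $nqn
    done

    # Drop the active path; bdev_nvme should fail over to a remaining one.
    $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0
    $rpc -s $sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $nqn
    sleep 3
    $rpc -s $sock bdev_nvme_get_controllers | grep -q NVMe0

    # Kick off the verify workload that the idle (-z) bdevperf was started with.
    $rootdir/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests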
00:25:40.897 [2024-05-15 00:41:00.245571] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2115331 ] 00:25:40.897 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.897 [2024-05-15 00:41:00.357738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:40.897 [2024-05-15 00:41:00.449750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.897 [2024-05-15 00:41:02.506954] bdev_nvme.c:1858:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:40.897 [2024-05-15 00:41:02.507023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.897 [2024-05-15 00:41:02.507037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.897 [2024-05-15 00:41:02.507050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.897 [2024-05-15 00:41:02.507062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.897 [2024-05-15 00:41:02.507072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.897 [2024-05-15 00:41:02.507080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.897 [2024-05-15 00:41:02.507089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:40.897 [2024-05-15 00:41:02.507097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:40.897 [2024-05-15 00:41:02.507105] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:40.897 [2024-05-15 00:41:02.507151] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:40.897 [2024-05-15 00:41:02.507175] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a0c80 (9): Bad file descriptor 00:25:40.897 [2024-05-15 00:41:02.600901] bdev_nvme.c:2055:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:40.897 Running I/O for 1 seconds... 
00:25:40.897 00:25:40.897 Latency(us) 00:25:40.897 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:40.897 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:40.897 Verification LBA range: start 0x0 length 0x4000 00:25:40.897 NVMe0n1 : 1.00 11619.46 45.39 0.00 0.00 10975.37 2431.73 12210.39 00:25:40.897 =================================================================================================================== 00:25:40.897 Total : 11619.46 45.39 0.00 0.00 10975.37 2431.73 12210.39 00:25:40.897 00:41:06 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:40.897 00:41:06 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:25:40.897 00:41:06 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:41.156 00:41:07 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:41.156 00:41:07 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:25:41.156 00:41:07 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:41.415 00:41:07 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:25:44.698 00:41:10 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:44.698 00:41:10 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:25:44.698 00:41:10 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2115331 00:25:44.698 00:41:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 2115331 ']' 00:25:44.698 00:41:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 2115331 00:25:44.698 00:41:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:25:44.698 00:41:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:44.698 00:41:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2115331 00:25:44.698 00:41:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:25:44.698 00:41:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:25:44.698 00:41:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2115331' 00:25:44.698 killing process with pid 2115331 00:25:44.698 00:41:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 2115331 00:25:44.698 00:41:10 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 2115331 00:25:44.959 00:41:10 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:25:44.959 00:41:10 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:45.217 00:41:11 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:45.217 00:41:11 nvmf_tcp.nvmf_failover 
-- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:45.217 00:41:11 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:45.217 00:41:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:45.217 00:41:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:45.218 00:41:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:45.218 00:41:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:45.218 00:41:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:45.218 00:41:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:45.218 rmmod nvme_tcp 00:25:45.218 rmmod nvme_fabrics 00:25:45.218 rmmod nvme_keyring 00:25:45.218 00:41:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:45.218 00:41:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:45.218 00:41:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:45.218 00:41:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2111681 ']' 00:25:45.218 00:41:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2111681 00:25:45.218 00:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@947 -- # '[' -z 2111681 ']' 00:25:45.218 00:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@951 -- # kill -0 2111681 00:25:45.218 00:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # uname 00:25:45.218 00:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:25:45.218 00:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2111681 00:25:45.218 00:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:25:45.218 00:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:25:45.218 00:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2111681' 00:25:45.218 killing process with pid 2111681 00:25:45.218 00:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # kill 2111681 00:25:45.218 [2024-05-15 00:41:11.296247] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:25:45.218 00:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@971 -- # wait 2111681 00:25:45.782 00:41:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:45.782 00:41:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:45.782 00:41:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:45.782 00:41:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:45.783 00:41:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:45.783 00:41:11 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:45.783 00:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:45.783 00:41:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.312 00:41:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:48.312 00:25:48.312 real 0m37.945s 00:25:48.312 user 2m1.480s 00:25:48.313 sys 0m6.830s 
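The teardown traced above follows the usual pattern for these host tests: stop the bdevperf process, delete the subsystem from the target, remove the scratch log, and let nvmftestfini unload the nvme-tcp/nvme-fabrics modules and kill the nvmf_tgt pid. A compressed sketch of the same cleanup, assuming the pids captured earlier in the run are still in shell variables:

    # Sketch only: cleanup at the end of the failover test.
    rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk   # assumed checkout location
    rpc=$rootdir/scripts/rpc.py

    kill "$bdevperf_pid" && wait "$bdevperf_pid"           # bdevperf (2115331 in this run)
    sync
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f $rootdir/test/nvmf/host/try.txt

    # nvmftestfini equivalent for a tcp run: unload the host modules, stop the target.
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"                     # nvmf_tgt (2111681 in this run)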
00:25:48.313 00:41:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # xtrace_disable 00:25:48.313 00:41:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:48.313 ************************************ 00:25:48.313 END TEST nvmf_failover 00:25:48.313 ************************************ 00:25:48.313 00:41:13 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_host_discovery /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:48.313 00:41:13 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:25:48.313 00:41:13 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:25:48.313 00:41:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:48.313 ************************************ 00:25:48.313 START TEST nvmf_host_discovery 00:25:48.313 ************************************ 00:25:48.313 00:41:13 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:48.313 * Looking for test storage... 00:25:48.313 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 
00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:48.313 00:41:14 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:53.579 00:41:19 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:25:53.579 Found 0000:27:00.0 (0x8086 - 0x159b) 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:25:53.579 Found 0000:27:00.1 (0x8086 - 0x159b) 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:25:53.579 00:41:19 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:25:53.579 Found net devices under 0000:27:00.0: cvl_0_0 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:25:53.579 Found net devices under 0000:27:00.1: cvl_0_1 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:53.579 00:41:19 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:53.579 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:53.580 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:53.580 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:53.580 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:53.580 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:53.580 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:53.580 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:25:53.580 00:25:53.580 --- 10.0.0.2 ping statistics --- 00:25:53.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.580 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:25:53.580 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:53.580 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:53.580 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:25:53.580 00:25:53.580 --- 10.0.0.1 ping statistics --- 00:25:53.580 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.580 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:25:53.580 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:53.580 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:25:53.580 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:53.580 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:53.580 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:53.580 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:53.580 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:53.580 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:53.580 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:53.580 00:41:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:53.580 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:53.580 00:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@721 -- # xtrace_disable 00:25:53.580 00:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.580 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2121592 00:25:53.580 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
2121592 00:25:53.580 00:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # '[' -z 2121592 ']' 00:25:53.580 00:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.580 00:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:53.580 00:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:53.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.580 00:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:53.580 00:41:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:53.580 00:41:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:53.837 [2024-05-15 00:41:19.784754] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:25:53.837 [2024-05-15 00:41:19.784853] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:53.837 EAL: No free 2048 kB hugepages reported on node 1 00:25:53.837 [2024-05-15 00:41:19.926139] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.096 [2024-05-15 00:41:20.069169] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:54.096 [2024-05-15 00:41:20.069223] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:54.096 [2024-05-15 00:41:20.069239] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:54.096 [2024-05-15 00:41:20.069254] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:54.096 [2024-05-15 00:41:20.069266] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
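Editor's note: the nvmf_tcp_init block traced above (nvmf/common.sh@229-268) builds a small namespace topology before the target starts. The sketch below condenses those steps; interface names and addresses are taken from this run, the commands are shown standalone rather than as the exact helper code, and everything runs as root the way the harness does.

# Rough sketch of the nvmf_tcp_init steps traced above, assuming the cvl_0_0/cvl_0_1
# port pair and 10.0.0.0/24 addressing seen in this log.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                      # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target namespace -> root namespace
# nvmfappstart then launches the target inside the namespace on core mask 0x2 and
# waits for its default /var/tmp/spdk.sock RPC socket (binary path shortened here):
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &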
00:25:54.096 [2024-05-15 00:41:20.069312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:54.354 00:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:54.354 00:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@861 -- # return 0 00:25:54.354 00:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:54.355 00:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@727 -- # xtrace_disable 00:25:54.355 00:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.614 00:41:20 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:54.614 00:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:54.614 00:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:54.614 00:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.614 [2024-05-15 00:41:20.531561] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.614 00:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:54.614 00:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:54.614 00:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:54.614 00:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.614 [2024-05-15 00:41:20.543484] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:25:54.614 [2024-05-15 00:41:20.543893] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:54.614 00:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:54.614 00:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:54.614 00:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:54.614 00:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.614 null0 00:25:54.614 00:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:54.614 00:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:54.614 00:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:54.614 00:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.614 null1 00:25:54.614 00:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:54.614 00:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:54.614 00:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:54.614 00:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.614 00:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:54.614 00:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2121647 00:25:54.614 
00:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2121647 /tmp/host.sock 00:25:54.615 00:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@828 -- # '[' -z 2121647 ']' 00:25:54.615 00:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@832 -- # local rpc_addr=/tmp/host.sock 00:25:54.615 00:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local max_retries=100 00:25:54.615 00:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:54.615 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:54.615 00:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@837 -- # xtrace_disable 00:25:54.615 00:41:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:54.615 00:41:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:54.615 [2024-05-15 00:41:20.656745] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:25:54.615 [2024-05-15 00:41:20.656858] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2121647 ] 00:25:54.615 EAL: No free 2048 kB hugepages reported on node 1 00:25:54.615 [2024-05-15 00:41:20.776253] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:54.873 [2024-05-15 00:41:20.876401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@861 -- # return 0 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # sort 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:55.438 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:55.438 00:41:21 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:55.439 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:55.439 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:55.439 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.439 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:55.439 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:55.439 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:55.439 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:55.439 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:55.439 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.439 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:55.439 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:55.439 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:55.439 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:55.439 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.696 [2024-05-15 00:41:21.644187] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@10 -- # set +x 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == \n\v\m\e\0 ]] 00:25:55.696 00:41:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # sleep 1 00:25:56.262 [2024-05-15 00:41:22.419773] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:56.263 [2024-05-15 00:41:22.419804] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:56.263 [2024-05-15 00:41:22.419836] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:56.521 [2024-05-15 00:41:22.507886] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:56.521 [2024-05-15 00:41:22.568928] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:56.521 [2024-05-15 00:41:22.568957] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:56.780 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 == \4\4\2\0 ]] 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:25:57.038 00:41:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:57.038 00:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:57.038 00:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:57.038 00:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:57.038 00:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.038 00:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:57.296 00:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:57.296 00:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:57.296 00:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:57.296 00:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:57.296 00:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:57.296 00:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:57.296 00:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:57.296 00:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:57.296 00:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:57.296 00:41:23 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:57.296 00:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:25:57.296 00:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:57.296 00:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:57.296 00:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:57.296 00:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:57.296 00:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:57.296 00:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:57.296 00:41:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:57.296 00:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:25:57.296 00:41:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # sleep 1 00:25:58.227 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:58.227 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:58.227 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:25:58.227 00:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:58.228 00:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:58.228 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:58.228 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.228 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:58.228 00:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:58.228 00:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:58.228 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:25:58.228 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:58.228 00:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:58.228 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:58.228 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.228 [2024-05-15 00:41:24.357104] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:58.228 [2024-05-15 00:41:24.358259] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:58.228 [2024-05-15 00:41:24.358295] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:58.228 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:58.228 00:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:58.228 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:58.228 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:58.228 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:58.228 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:58.228 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:25:58.228 00:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:58.228 00:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:58.228 00:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:58.228 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:58.228 00:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:58.228 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.228 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:58.487 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.487 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:58.487 00:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:58.487 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" 
]]' 00:25:58.487 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:58.487 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:58.487 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:58.487 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:25:58.487 00:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:58.487 00:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:58.487 00:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:58.488 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:58.488 00:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:58.488 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.488 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:58.488 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:58.488 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:58.488 00:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:58.488 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:58.488 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:58.488 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:58.488 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:58.488 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:25:58.488 00:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:58.488 00:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:58.488 00:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:58.488 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:58.488 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:58.488 00:41:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:58.488 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:58.488 [2024-05-15 00:41:24.487376] bdev_nvme.c:6891:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:58.488 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:58.488 00:41:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@917 -- # sleep 1 00:25:58.747 [2024-05-15 00:41:24.753995] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:58.747 [2024-05-15 00:41:24.754023] 
bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:58.747 [2024-05-15 00:41:24.754033] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.680 [2024-05-15 00:41:25.578366] bdev_nvme.c:6949:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:59.680 [2024-05-15 00:41:25.578398] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:59.680 [2024-05-15 00:41:25.581284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.680 [2024-05-15 00:41:25.581310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.680 [2024-05-15 00:41:25.581322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.680 [2024-05-15 00:41:25.581331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.680 [2024-05-15 00:41:25.581344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.680 [2024-05-15 00:41:25.581353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.680 [2024-05-15 00:41:25.581362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:59.680 [2024-05-15 00:41:25.581371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:59.680 [2024-05-15 00:41:25.581379] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a0f00 is same with the state(5) to be set 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 
-- # get_subsystem_names 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.680 [2024-05-15 00:41:25.591272] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a0f00 (9): Bad file descriptor 00:25:59.680 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:59.680 [2024-05-15 00:41:25.601286] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:59.680 [2024-05-15 00:41:25.601582] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.680 [2024-05-15 00:41:25.601757] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.680 [2024-05-15 00:41:25.601770] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0f00 with addr=10.0.0.2, port=4420 00:25:59.680 [2024-05-15 00:41:25.601782] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a0f00 is same with the state(5) to be set 00:25:59.680 [2024-05-15 00:41:25.601797] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a0f00 (9): Bad file descriptor 00:25:59.680 [2024-05-15 00:41:25.601819] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:59.680 [2024-05-15 00:41:25.601828] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:59.680 [2024-05-15 00:41:25.601839] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:59.680 [2024-05-15 00:41:25.601856] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
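Editor's note: the "connect() failed, errno = 111" (ECONNREFUSED) bursts above are expected here. host/discovery.sh@127 just removed the 4420 listener from the target subsystem, so the host's nvme0 path to 10.0.0.2:4420 keeps failing its reconnect attempts until the discovery service delivers an updated log page and the path is dropped. A minimal sketch of the triggering call, assuming the harness's rpc_cmd is equivalent to invoking scripts/rpc.py against the target's default /var/tmp/spdk.sock:

# Remove the first data listener (host/discovery.sh@127). The host side retries
# 10.0.0.2:4420 with errno 111 until the next discovery log page reports it gone.
./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420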
00:25:59.680 [2024-05-15 00:41:25.611332] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:59.680 [2024-05-15 00:41:25.611665] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.680 [2024-05-15 00:41:25.611861] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.680 [2024-05-15 00:41:25.611872] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0f00 with addr=10.0.0.2, port=4420 00:25:59.680 [2024-05-15 00:41:25.611885] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a0f00 is same with the state(5) to be set 00:25:59.680 [2024-05-15 00:41:25.611898] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a0f00 (9): Bad file descriptor 00:25:59.680 [2024-05-15 00:41:25.611917] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:59.680 [2024-05-15 00:41:25.611926] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:59.680 [2024-05-15 00:41:25.611934] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:59.680 [2024-05-15 00:41:25.611946] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.680 [2024-05-15 00:41:25.621375] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:59.680 [2024-05-15 00:41:25.621591] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.680 [2024-05-15 00:41:25.621800] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.680 [2024-05-15 00:41:25.621812] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0f00 with addr=10.0.0.2, port=4420 00:25:59.680 [2024-05-15 00:41:25.621822] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a0f00 is same with the state(5) to be set 00:25:59.681 [2024-05-15 00:41:25.621836] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a0f00 (9): Bad file descriptor 00:25:59.681 [2024-05-15 00:41:25.621849] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:59.681 [2024-05-15 00:41:25.621858] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:59.681 [2024-05-15 00:41:25.621867] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:59.681 [2024-05-15 00:41:25.621880] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
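Editor's note: the repeated is_notification_count_eq checks throughout this trace expand to two small helpers in host/discovery.sh. A hedged reconstruction from the expanded commands above, with rpc_cmd shown as a direct scripts/rpc.py call; exact variable handling in the script may differ.

# Count bdev notifications newer than the last seen id, then advance the id.
notify_id=0
get_notification_count() {
    notification_count=$(./scripts/rpc.py -s /tmp/host.sock \
        notify_get_notifications -i "$notify_id" | jq '. | length')
    notify_id=$((notify_id + notification_count))
}
# Poll up to 10 times, one second apart, the way waitforcondition does in the trace.
is_notification_count_eq() {
    local expected_count=$1
    for _ in $(seq 1 10); do
        get_notification_count
        (( notification_count == expected_count )) && return 0
        sleep 1
    done
    return 1
}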
00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:59.681 [2024-05-15 00:41:25.631413] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:59.681 [2024-05-15 00:41:25.631650] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.681 [2024-05-15 00:41:25.631979] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.681 [2024-05-15 00:41:25.631989] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0f00 with addr=10.0.0.2, port=4420 00:25:59.681 [2024-05-15 00:41:25.632003] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a0f00 is same with the state(5) to be set 00:25:59.681 [2024-05-15 00:41:25.632016] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a0f00 (9): Bad file descriptor 00:25:59.681 [2024-05-15 00:41:25.632034] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:59.681 [2024-05-15 00:41:25.632043] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:59.681 [2024-05-15 00:41:25.632052] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:59.681 [2024-05-15 00:41:25.632065] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:59.681 [2024-05-15 00:41:25.641452] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:59.681 [2024-05-15 00:41:25.641777] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.681 [2024-05-15 00:41:25.642029] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.681 [2024-05-15 00:41:25.642041] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0f00 with addr=10.0.0.2, port=4420 00:25:59.681 [2024-05-15 00:41:25.642051] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a0f00 is same with the state(5) to be set 00:25:59.681 [2024-05-15 00:41:25.642064] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a0f00 (9): Bad file descriptor 00:25:59.681 [2024-05-15 00:41:25.642083] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:59.681 [2024-05-15 00:41:25.642092] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:59.681 [2024-05-15 00:41:25.642101] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:59.681 [2024-05-15 00:41:25.642114] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.681 [2024-05-15 00:41:25.651492] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:59.681 [2024-05-15 00:41:25.651634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:59.681 [2024-05-15 00:41:25.651959] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.681 [2024-05-15 00:41:25.651971] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0f00 with addr=10.0.0.2, port=4420 00:25:59.681 [2024-05-15 00:41:25.651981] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a0f00 is same with the state(5) to be set 00:25:59.681 [2024-05-15 00:41:25.651993] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a0f00 (9): Bad file descriptor 00:25:59.681 [2024-05-15 00:41:25.652007] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:59.681 [2024-05-15 00:41:25.652014] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:59.681 [2024-05-15 00:41:25.652021] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:59.681 [2024-05-15 00:41:25.652034] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
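Editor's note: once the retries above settle, the test confirms that only the second listener remains reachable (host/discovery.sh@131 in the lines that follow). Reconstructed from the expanded trace, with NVMF_SECOND_PORT assumed to be 4421 as the earlier nvmf_subsystem_add_listener call suggests:

# Hedged reconstruction of the path check (host/discovery.sh@63 and @131).
get_subsystem_paths() {
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
# Poll until the nvme0 controller lists only the 4421 path.
for _ in $(seq 1 10); do
    [[ "$(get_subsystem_paths nvme0)" == "4421" ]] && break
    sleep 1
done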
00:25:59.681 [2024-05-15 00:41:25.661531] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:59.681 [2024-05-15 00:41:25.661704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.681 [2024-05-15 00:41:25.661834] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:59.681 [2024-05-15 00:41:25.661849] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a0f00 with addr=10.0.0.2, port=4420 00:25:59.681 [2024-05-15 00:41:25.661864] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a0f00 is same with the state(5) to be set 00:25:59.681 [2024-05-15 00:41:25.661879] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a0f00 (9): Bad file descriptor 00:25:59.681 [2024-05-15 00:41:25.661894] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:59.681 [2024-05-15 00:41:25.661904] nvme_ctrlr.c:1750:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:59.681 [2024-05-15 00:41:25.661913] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:59.681 [2024-05-15 00:41:25.661930] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:59.681 [2024-05-15 00:41:25.666128] bdev_nvme.c:6754:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:59.681 [2024-05-15 00:41:25.666156] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_paths nvme0 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ 4421 == \4\4\2\1 ]] 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:59.681 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_subsystem_names 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:59.682 
00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == '' ]] 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_bdev_list 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # [[ '' == '' ]] 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@911 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local max=10 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # (( max-- )) 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:59.682 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # get_notification_count 00:25:59.939 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:59.939 00:41:25 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:59.939 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:59.939 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:59.939 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:25:59.939 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:59.939 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:59.939 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( notification_count == expected_count )) 00:25:59.939 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # return 0 00:25:59.939 00:41:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:59.939 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:25:59.939 00:41:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:00.877 [2024-05-15 00:41:26.928882] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:00.877 [2024-05-15 00:41:26.928907] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:00.877 [2024-05-15 00:41:26.928931] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:00.877 [2024-05-15 00:41:27.016986] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:01.135 [2024-05-15 00:41:27.083131] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:01.135 [2024-05-15 00:41:27.083172] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:01.135 
00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.135 request: 00:26:01.135 { 00:26:01.135 "name": "nvme", 00:26:01.135 "trtype": "tcp", 00:26:01.135 "traddr": "10.0.0.2", 00:26:01.135 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:01.135 "adrfam": "ipv4", 00:26:01.135 "trsvcid": "8009", 00:26:01.135 "wait_for_attach": true, 00:26:01.135 "method": "bdev_nvme_start_discovery", 00:26:01.135 "req_id": 1 00:26:01.135 } 00:26:01.135 Got JSON-RPC error response 00:26:01.135 response: 00:26:01.135 { 00:26:01.135 "code": -17, 00:26:01.135 "message": "File exists" 00:26:01.135 } 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.135 00:41:27 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.135 request: 00:26:01.135 { 00:26:01.135 "name": "nvme_second", 00:26:01.135 "trtype": "tcp", 00:26:01.135 "traddr": "10.0.0.2", 00:26:01.135 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:01.135 "adrfam": "ipv4", 00:26:01.135 "trsvcid": "8009", 00:26:01.135 "wait_for_attach": true, 00:26:01.135 "method": "bdev_nvme_start_discovery", 00:26:01.135 "req_id": 1 00:26:01.135 } 00:26:01.135 Got JSON-RPC error response 00:26:01.135 response: 00:26:01.135 { 00:26:01.135 "code": -17, 00:26:01.135 "message": "File exists" 00:26:01.135 } 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@649 -- # local es=0 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:26:01.135 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:01.136 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:01.136 00:41:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:02.510 [2024-05-15 00:41:28.292503] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.510 [2024-05-15 00:41:28.292737] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:02.510 [2024-05-15 00:41:28.292750] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a2080 with addr=10.0.0.2, port=8010 00:26:02.510 [2024-05-15 00:41:28.292781] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:02.510 [2024-05-15 00:41:28.292793] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:02.510 [2024-05-15 00:41:28.292802] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:03.443 [2024-05-15 00:41:29.292466] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.443 [2024-05-15 00:41:29.292581] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:03.443 [2024-05-15 00:41:29.292597] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a2300 with addr=10.0.0.2, port=8010 00:26:03.443 [2024-05-15 00:41:29.292623] nvme_tcp.c:2702:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:03.443 [2024-05-15 00:41:29.292632] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:03.443 [2024-05-15 00:41:29.292641] bdev_nvme.c:7029:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:04.377 [2024-05-15 00:41:30.292187] bdev_nvme.c:7010:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:04.377 request: 00:26:04.378 { 00:26:04.378 "name": "nvme_second", 00:26:04.378 "trtype": "tcp", 00:26:04.378 "traddr": "10.0.0.2", 00:26:04.378 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:04.378 "adrfam": "ipv4", 00:26:04.378 
"trsvcid": "8010", 00:26:04.378 "attach_timeout_ms": 3000, 00:26:04.378 "method": "bdev_nvme_start_discovery", 00:26:04.378 "req_id": 1 00:26:04.378 } 00:26:04.378 Got JSON-RPC error response 00:26:04.378 response: 00:26:04.378 { 00:26:04.378 "code": -110, 00:26:04.378 "message": "Connection timed out" 00:26:04.378 } 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@652 -- # es=1 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2121647 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:04.378 rmmod nvme_tcp 00:26:04.378 rmmod nvme_fabrics 00:26:04.378 rmmod nvme_keyring 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2121592 ']' 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2121592 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@947 -- # '[' -z 2121592 ']' 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@951 -- # kill -0 2121592 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # uname 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' Linux 
= Linux ']' 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2121592 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2121592' 00:26:04.378 killing process with pid 2121592 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # kill 2121592 00:26:04.378 [2024-05-15 00:41:30.455708] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:04.378 00:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@971 -- # wait 2121592 00:26:04.944 00:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:04.944 00:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:04.944 00:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:04.944 00:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:04.944 00:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:04.945 00:41:30 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:04.945 00:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:04.945 00:41:30 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:06.842 00:41:32 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:06.842 00:26:06.842 real 0m19.064s 00:26:06.842 user 0m23.968s 00:26:06.842 sys 0m5.676s 00:26:06.842 00:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:06.842 00:41:32 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:06.842 ************************************ 00:26:06.842 END TEST nvmf_host_discovery 00:26:06.842 ************************************ 00:26:07.100 00:41:33 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:07.100 00:41:33 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:26:07.100 00:41:33 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:07.100 00:41:33 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:07.100 ************************************ 00:26:07.100 START TEST nvmf_host_multipath_status 00:26:07.100 ************************************ 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:07.100 * Looking for test storage... 
00:26:07.100 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.100 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:26:07.101 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:07.101 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:07.101 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:07.101 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:07.101 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:07.101 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:07.101 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:07.101 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:07.101 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:07.101 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:07.101 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:26:07.101 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/bpftrace.sh 00:26:07.101 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:07.101 00:41:33 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:07.101 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:07.101 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:07.101 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:07.101 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:07.101 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:07.101 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:07.101 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.101 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:07.101 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.101 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:26:07.101 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:07.101 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:26:07.101 00:41:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:12.423 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:12.423 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:26:12.423 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:12.423 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:12.423 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:12.423 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:12.423 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:12.423 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:26:12.423 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:12.423 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:26:12.423 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:26:12.423 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:26:12.423 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:26:12.423 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:26:12.423 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:26:12.423 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:12.423 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:12.423 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:26:12.424 Found 0000:27:00.0 (0x8086 - 0x159b) 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:26:12.424 Found 0000:27:00.1 (0x8086 - 0x159b) 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:26:12.424 Found net devices under 0000:27:00.0: cvl_0_0 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:26:12.424 Found net devices under 0000:27:00.1: cvl_0_1 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:12.424 00:41:38 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:12.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:12.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.314 ms 00:26:12.424 00:26:12.424 --- 10.0.0.2 ping statistics --- 00:26:12.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.424 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:12.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:12.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:26:12.424 00:26:12.424 --- 10.0.0.1 ping statistics --- 00:26:12.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.424 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2127606 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2127606 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # '[' -z 2127606 ']' 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:12.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:12.424 00:41:38 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:12.424 [2024-05-15 00:41:38.388537] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:26:12.424 [2024-05-15 00:41:38.388648] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:12.424 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.424 [2024-05-15 00:41:38.516434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:12.684 [2024-05-15 00:41:38.616670] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:12.684 [2024-05-15 00:41:38.616713] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:12.684 [2024-05-15 00:41:38.616724] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:12.684 [2024-05-15 00:41:38.616735] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:12.684 [2024-05-15 00:41:38.616744] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:12.684 [2024-05-15 00:41:38.616832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.684 [2024-05-15 00:41:38.616856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:12.946 00:41:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:12.946 00:41:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@861 -- # return 0 00:26:12.946 00:41:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:12.946 00:41:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:12.946 00:41:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:13.203 00:41:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:13.203 00:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2127606 00:26:13.203 00:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:13.203 [2024-05-15 00:41:39.261069] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.203 00:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:13.461 Malloc0 00:26:13.461 00:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:13.461 00:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:13.718 00:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:13.718 [2024-05-15 00:41:39.844509] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 
00:26:13.718 [2024-05-15 00:41:39.844792] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.718 00:41:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:13.976 [2024-05-15 00:41:40.000791] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:13.976 00:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2127990 00:26:13.976 00:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:13.976 00:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2127990 /var/tmp/bdevperf.sock 00:26:13.976 00:41:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@828 -- # '[' -z 2127990 ']' 00:26:13.976 00:41:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:13.976 00:41:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:13.976 00:41:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:13.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:13.976 00:41:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:13.976 00:41:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:13.976 00:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:14.917 00:41:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:14.917 00:41:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@861 -- # return 0 00:26:14.917 00:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:14.917 00:41:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:15.175 Nvme0n1 00:26:15.175 00:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:15.433 Nvme0n1 00:26:15.433 00:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:15.433 00:41:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:17.960 00:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:17.960 00:41:43 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:17.960 00:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:17.960 00:41:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:18.895 00:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:18.895 00:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:18.895 00:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.895 00:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:18.895 00:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:18.895 00:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:18.895 00:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:18.895 00:41:44 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:19.152 00:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:19.152 00:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:19.152 00:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.152 00:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:19.152 00:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.152 00:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:19.152 00:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.152 00:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:19.411 00:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.411 00:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:19.411 00:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
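The host side of the test, reconstructed from the @44 through @78 trace lines above, is sketched below. Paths are again shortened, and the trailing '&' is added here only for readability; in the trace the script itself manages these processes in the background while the status checks run.

  # bdevperf starts idle and waits for RPC configuration (-z) on its own socket:
  # queue depth 128, 4096-byte I/Os, verify workload, 90 s run time, core mask 0x4
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 &

  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1

  # The first attach creates controller Nvme0 through port 4420; the second attach
  # to the same subsystem through port 4421 with -x multipath adds a second I/O
  # path to the same Nvme0n1 bdev rather than creating a new bdev
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 \
      -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10

  # Kick off the actual I/O; the ANA-state cycles below run while this is active
  ./examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests &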
00:26:19.411 00:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:19.411 00:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.411 00:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:19.411 00:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:19.411 00:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:19.668 00:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:19.668 00:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:19.668 00:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:19.668 00:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:19.925 00:41:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:20.864 00:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:20.864 00:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:20.864 00:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:20.864 00:41:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:21.123 00:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:21.123 00:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:21.123 00:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.123 00:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:21.123 00:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.123 00:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:21.123 00:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.123 00:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:26:21.381 00:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.381 00:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:21.381 00:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:21.381 00:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.638 00:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.638 00:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:21.638 00:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:21.638 00:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.638 00:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.638 00:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:21.638 00:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:21.638 00:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:21.896 00:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:21.896 00:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:21.896 00:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:21.896 00:41:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:22.154 00:41:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:23.093 00:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:23.093 00:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:23.093 00:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.093 00:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:23.351 00:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
00:26:23.352 00:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:23.352 00:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:23.352 00:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.352 00:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:23.352 00:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:23.352 00:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:23.352 00:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.609 00:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.609 00:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:23.609 00:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:23.609 00:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.609 00:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.609 00:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:23.609 00:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.609 00:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:23.866 00:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.866 00:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:23.866 00:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:23.866 00:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:23.866 00:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:23.866 00:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:23.866 00:41:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:24.123 00:41:50 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:24.381 00:41:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:26:25.315 00:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:26:25.315 00:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:25.315 00:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.315 00:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:25.315 00:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.315 00:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:25.315 00:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.315 00:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:25.573 00:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:25.573 00:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:25.573 00:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:25.573 00:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.573 00:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.573 00:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:25.573 00:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.573 00:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:25.829 00:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:25.829 00:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:25.829 00:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:25.829 00:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:26.088 00:41:51 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:26.088 00:41:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:26.088 00:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:26.088 00:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:26.088 00:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:26.088 00:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:26:26.088 00:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:26.347 00:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:26.347 00:41:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:26:27.284 00:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:26:27.284 00:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:27.284 00:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.284 00:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:27.541 00:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:27.541 00:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:27.541 00:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.541 00:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:27.541 00:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:27.541 00:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:27.798 00:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:27.798 00:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:27.798 00:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:27.798 00:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:26:27.798 00:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:27.798 00:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.054 00:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:28.054 00:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:28.054 00:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:28.054 00:41:53 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.054 00:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:28.054 00:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:28.054 00:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:28.054 00:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:28.313 00:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:28.313 00:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:26:28.313 00:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:26:28.313 00:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:28.572 00:41:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:26:29.507 00:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:26:29.507 00:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:29.507 00:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:29.507 00:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.764 00:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:29.764 00:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:29.764 00:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").current' 00:26:29.764 00:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.764 00:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:29.764 00:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:29.764 00:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:29.764 00:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:30.021 00:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.021 00:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:30.021 00:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.021 00:41:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:30.021 00:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.021 00:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:26:30.021 00:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:30.021 00:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.279 00:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:30.279 00:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:30.279 00:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:30.279 00:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:30.279 00:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:30.279 00:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:26:30.538 00:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:26:30.538 00:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:30.797 00:41:56 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:30.797 00:41:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:26:31.734 00:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:26:31.734 00:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:31.734 00:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.734 00:41:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:31.992 00:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:31.992 00:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:31.992 00:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:31.992 00:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:32.252 00:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.252 00:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:32.252 00:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.252 00:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:32.252 00:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.252 00:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:32.252 00:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.252 00:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:32.513 00:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.513 00:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:32.513 00:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.513 00:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:32.513 00:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
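The sequence running through this part of the log, starting at @116, switches the assembled bdev to the active_active policy and then repeats the same ANA-state and port-status cycle. Condensed, with paths shortened as before, it is roughly:

  # Switch the multipath policy of the host-side bdev to active_active
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

  # set_ANA_state optimized optimized: one listener-level RPC per path on the target
  scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n optimized
  scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4421 -n optimized
  sleep 1   # give the host a moment to pick up the ANA change before check_status runs

With both listeners optimized under active_active, the @121 check expects current=true on 4420 and 4421 simultaneously, whereas the earlier checks before the policy switch only ever expected one current path at a time.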
00:26:32.513 00:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:32.513 00:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:32.513 00:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:32.771 00:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:32.771 00:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:26:32.771 00:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:32.771 00:41:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:33.031 00:41:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:26:34.067 00:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:26:34.067 00:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:34.067 00:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:34.067 00:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.067 00:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:34.067 00:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:34.067 00:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.067 00:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:34.325 00:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.325 00:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:34.325 00:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.325 00:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:34.325 00:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.325 00:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:34.584 00:42:00 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:34.584 00:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.584 00:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.584 00:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:34.584 00:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.584 00:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:34.844 00:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.844 00:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:34.844 00:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:34.844 00:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:34.844 00:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:34.844 00:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:26:34.844 00:42:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:35.105 00:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:35.105 00:42:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:26:36.480 00:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:36.480 00:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:36.480 00:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.480 00:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:36.480 00:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.480 00:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:36.480 00:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:26:36.480 00:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:36.480 00:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.480 00:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:36.480 00:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:36.481 00:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.739 00:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.739 00:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:36.739 00:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.739 00:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:36.739 00:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.739 00:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:36.739 00:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.739 00:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:36.996 00:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.997 00:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:36.997 00:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:36.997 00:42:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:36.997 00:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:36.997 00:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:36.997 00:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:37.254 00:42:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:37.510 00:42:03 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@134 -- # sleep 1 00:26:38.443 00:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:38.443 00:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:38.443 00:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:38.443 00:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.700 00:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.700 00:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:38.700 00:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.700 00:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:38.700 00:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:38.700 00:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:38.700 00:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:38.700 00:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.957 00:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.957 00:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:38.957 00:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:38.957 00:42:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:38.957 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:38.957 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:38.957 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:38.957 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.217 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:39.217 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:39.217 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:26:39.217 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:39.217 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:39.217 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2127990 00:26:39.217 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' -z 2127990 ']' 00:26:39.217 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # kill -0 2127990 00:26:39.475 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # uname 00:26:39.475 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:39.475 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2127990 00:26:39.475 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # process_name=reactor_2 00:26:39.475 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' reactor_2 = sudo ']' 00:26:39.475 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2127990' 00:26:39.475 killing process with pid 2127990 00:26:39.475 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # kill 2127990 00:26:39.475 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # wait 2127990 00:26:39.475 Connection closed with partial response: 00:26:39.475 00:26:39.475 00:26:39.754 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2127990 00:26:39.754 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:39.754 [2024-05-15 00:41:40.084977] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:26:39.754 [2024-05-15 00:41:40.085097] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2127990 ] 00:26:39.754 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.754 [2024-05-15 00:41:40.198647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.754 [2024-05-15 00:41:40.294251] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:39.754 Running I/O for 90 seconds... 
00:26:39.754 [2024-05-15 00:41:52.259958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:83320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:39.754 [2024-05-15 00:41:52.260012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:26:39.754 (nvme_qpair.c command/completion NOTICE pairs of this form repeat for every queued WRITE and READ on qid:1, lba 83136-84152, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), through [2024-05-15 00:41:52.266334])
00:26:39.761 [2024-05-15 00:41:52.266349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:39.761 [2024-05-15 00:41:52.266358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:39.761 [2024-05-15 00:41:52.266372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.761 [2024-05-15 00:41:52.266381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:39.761 [2024-05-15 00:41:52.266395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.761 [2024-05-15 00:41:52.266404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:39.761 [2024-05-15 00:41:52.266419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.761 [2024-05-15 00:41:52.266427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:39.761 [2024-05-15 00:41:52.266441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.761 [2024-05-15 00:41:52.266449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:39.761 [2024-05-15 00:41:52.266463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.761 [2024-05-15 00:41:52.266472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:39.761 [2024-05-15 00:41:52.266487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.761 [2024-05-15 00:41:52.266495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:39.761 [2024-05-15 00:41:52.266509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.761 [2024-05-15 00:41:52.266518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:39.761 [2024-05-15 00:41:52.266532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.761 [2024-05-15 00:41:52.266541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:39.761 [2024-05-15 00:41:52.266559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.761 [2024-05-15 00:41:52.266568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:39.761 [2024-05-15 00:41:52.266582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 
nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.761 [2024-05-15 00:41:52.266591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:39.761 [2024-05-15 00:41:52.266606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.761 [2024-05-15 00:41:52.266615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:39.761 [2024-05-15 00:41:52.266630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.761 [2024-05-15 00:41:52.266639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:39.761 [2024-05-15 00:41:52.266656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.761 [2024-05-15 00:41:52.266665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:39.761 [2024-05-15 00:41:52.266680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.761 [2024-05-15 00:41:52.266689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:39.761 [2024-05-15 00:41:52.266704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.761 [2024-05-15 00:41:52.266713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:39.761 [2024-05-15 00:41:52.266728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.761 [2024-05-15 00:41:52.266737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:39.761 [2024-05-15 00:41:52.266751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:83168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.761 [2024-05-15 00:41:52.266760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:39.761 [2024-05-15 00:41:52.266775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.761 [2024-05-15 00:41:52.266783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:39.761 [2024-05-15 00:41:52.266798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.761 [2024-05-15 00:41:52.266807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:39.761 [2024-05-15 00:41:52.266822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.761 [2024-05-15 00:41:52.266830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:39.761 [2024-05-15 00:41:52.266844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.761 [2024-05-15 00:41:52.266853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.761 [2024-05-15 00:41:52.266867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.762 [2024-05-15 00:41:52.266876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.266890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.762 [2024-05-15 00:41:52.266899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.266913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.762 [2024-05-15 00:41:52.266922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.266938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.762 [2024-05-15 00:41:52.266946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.266961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.762 [2024-05-15 00:41:52.266969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.266984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.762 [2024-05-15 00:41:52.266993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.267007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.762 [2024-05-15 00:41:52.267015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.267031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.762 [2024-05-15 00:41:52.267040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 
00:26:39.762 [2024-05-15 00:41:52.267054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.762 [2024-05-15 00:41:52.267063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.267078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.762 [2024-05-15 00:41:52.267086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.267100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.762 [2024-05-15 00:41:52.267109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.267124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.762 [2024-05-15 00:41:52.267132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.267146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.762 [2024-05-15 00:41:52.267155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.267169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.762 [2024-05-15 00:41:52.267178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.267193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.762 [2024-05-15 00:41:52.267201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.267216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.762 [2024-05-15 00:41:52.267226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.267240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.762 [2024-05-15 00:41:52.267249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.267264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.762 [2024-05-15 00:41:52.267273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.267287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.762 [2024-05-15 00:41:52.267296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.267310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.762 [2024-05-15 00:41:52.267319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.267333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.762 [2024-05-15 00:41:52.267342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.267357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.762 [2024-05-15 00:41:52.267365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.267379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.762 [2024-05-15 00:41:52.267388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.267403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.762 [2024-05-15 00:41:52.267411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.267426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:83296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.762 [2024-05-15 00:41:52.267438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.267453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.762 [2024-05-15 00:41:52.267461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.267476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.762 [2024-05-15 00:41:52.267484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.267498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.762 [2024-05-15 00:41:52.267508] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.267522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.762 [2024-05-15 00:41:52.267531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.267545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.762 [2024-05-15 00:41:52.267560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.267576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.762 [2024-05-15 00:41:52.267584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.267598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.762 [2024-05-15 00:41:52.267607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.267622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.762 [2024-05-15 00:41:52.267630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:39.762 [2024-05-15 00:41:52.268186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.762 [2024-05-15 00:41:52.268196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:39.763 [2024-05-15 00:41:52.268312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 
lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268772] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:83616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.763 [2024-05-15 00:41:52.268965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:39.763 [2024-05-15 00:41:52.268979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.268988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005b p:0 m:0 dnr:0 
00:26:39.764 [2024-05-15 00:41:52.269002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.269011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.269024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.269033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.269047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.269056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.269070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.269079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.269093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.269102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.269117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.269126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.269140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.269148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.269163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.269172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.269186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.269194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.269209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.269218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:26 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.269233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.269241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.269256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.269265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.269279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.269287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.269302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.269310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.269325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.269334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.269347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.269356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.269370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.269379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.269393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.269401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.269416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.269424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.269439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.269447] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.269461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.269469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.269484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.269492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.269508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.269516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.270027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.270040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.270055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.270064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.270078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.270087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.270101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.270110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.270124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.270133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.270147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.270156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.270170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:39.764 [2024-05-15 00:41:52.270179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.270193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.270203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:39.764 [2024-05-15 00:41:52.270218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.764 [2024-05-15 00:41:52.270227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:39.765 [2024-05-15 00:41:52.270241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.765 [2024-05-15 00:41:52.270250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.765 [2024-05-15 00:41:52.270269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.765 [2024-05-15 00:41:52.270278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:39.765 [2024-05-15 00:41:52.270294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.765 [2024-05-15 00:41:52.270304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:39.765 [2024-05-15 00:41:52.270319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.765 [2024-05-15 00:41:52.270328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:39.765 [2024-05-15 00:41:52.270342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.765 [2024-05-15 00:41:52.270350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.765 [2024-05-15 00:41:52.270364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.765 [2024-05-15 00:41:52.270373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.765 [2024-05-15 00:41:52.270387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.765 [2024-05-15 00:41:52.270396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.765 [2024-05-15 00:41:52.270410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 
lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.765 [2024-05-15 00:41:52.270419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:39.765 [2024-05-15 00:41:52.270435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.765 [2024-05-15 00:41:52.270444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:39.765 [2024-05-15 00:41:52.270458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.765 [2024-05-15 00:41:52.270466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:39.765 [2024-05-15 00:41:52.270480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.765 [2024-05-15 00:41:52.270489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:39.765 [2024-05-15 00:41:52.270504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.765 [2024-05-15 00:41:52.270512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:39.765 [2024-05-15 00:41:52.270526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.765 [2024-05-15 00:41:52.270535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:39.765 [2024-05-15 00:41:52.270553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.765 [2024-05-15 00:41:52.270562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:39.765 [2024-05-15 00:41:52.270576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.765 [2024-05-15 00:41:52.270586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:39.765 [2024-05-15 00:41:52.270600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.765 [2024-05-15 00:41:52.270609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:39.765 [2024-05-15 00:41:52.270624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.765 [2024-05-15 00:41:52.270632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:39.765 [2024-05-15 00:41:52.270646] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.765 [2024-05-15 00:41:52.270655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:39.765 [2024-05-15 00:41:52.270670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.765 [2024-05-15 00:41:52.270679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:39.765 [2024-05-15 00:41:52.270693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.765 [2024-05-15 00:41:52.270702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:39.765 [2024-05-15 00:41:52.270716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.765 [2024-05-15 00:41:52.270725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:39.765 [2024-05-15 00:41:52.270740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.765 [2024-05-15 00:41:52.270749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:39.765 [2024-05-15 00:41:52.270764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.765 [2024-05-15 00:41:52.270773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:39.765 [2024-05-15 00:41:52.270788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.765 [2024-05-15 00:41:52.270796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:39.765 [2024-05-15 00:41:52.270811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:83168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.765 [2024-05-15 00:41:52.270819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:39.765 [2024-05-15 00:41:52.270840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.765 [2024-05-15 00:41:52.270848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:39.765 [2024-05-15 00:41:52.270862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.766 [2024-05-15 00:41:52.270873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 
00:26:39.766 [2024-05-15 00:41:52.270887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.766 [2024-05-15 00:41:52.270896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.270910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.766 [2024-05-15 00:41:52.270919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.270933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.766 [2024-05-15 00:41:52.270941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.270956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.766 [2024-05-15 00:41:52.270964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.270978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.766 [2024-05-15 00:41:52.270986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.271001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.766 [2024-05-15 00:41:52.271009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.271023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.766 [2024-05-15 00:41:52.271032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.271046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.766 [2024-05-15 00:41:52.271055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.271068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.766 [2024-05-15 00:41:52.271077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.271091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.766 [2024-05-15 00:41:52.271100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:113 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.271114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.766 [2024-05-15 00:41:52.271123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.271137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.766 [2024-05-15 00:41:52.271146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.271161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.766 [2024-05-15 00:41:52.271169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.271184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.766 [2024-05-15 00:41:52.271193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.271208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.766 [2024-05-15 00:41:52.271216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.271231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.766 [2024-05-15 00:41:52.271239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.271254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.766 [2024-05-15 00:41:52.271262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.271276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.766 [2024-05-15 00:41:52.271285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.271300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.766 [2024-05-15 00:41:52.271309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.271323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.766 [2024-05-15 00:41:52.271332] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.271347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.766 [2024-05-15 00:41:52.271355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.271370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.766 [2024-05-15 00:41:52.271379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.271394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.766 [2024-05-15 00:41:52.271402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.271416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.766 [2024-05-15 00:41:52.271425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.271440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.766 [2024-05-15 00:41:52.271449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.271463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.766 [2024-05-15 00:41:52.271472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.271486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.766 [2024-05-15 00:41:52.271495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.271510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.766 [2024-05-15 00:41:52.271518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.271532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.766 [2024-05-15 00:41:52.271541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.271559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:39.766 [2024-05-15 00:41:52.271567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.271581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:83320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.766 [2024-05-15 00:41:52.271590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:39.766 [2024-05-15 00:41:52.271604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.766 [2024-05-15 00:41:52.271613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.271627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.271636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.271650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:83344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.271659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:71 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272574] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 
00:26:39.767 [2024-05-15 00:41:52.272803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.767 [2024-05-15 00:41:52.272928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:39.767 [2024-05-15 00:41:52.272942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.272951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.272965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.272974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.272988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.272997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.273011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.273019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:51 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.273034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.273043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.273058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.273067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.273081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.273090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.273104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.273114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.273129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.273138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.273152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.273166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.273186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.273196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.273210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.273218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.273233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.273242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.273256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.273265] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.273280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.273289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.273303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.273312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.273326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.273334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.273348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.273357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.273371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.273380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.273394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.273403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.273418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.273428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.273442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.273451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.273465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.273474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.273488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:39.768 [2024-05-15 00:41:52.273497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.273512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.273521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.273536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.273546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.274032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.274043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.274060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.274069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.274084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.274093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.274106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.274115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.274129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.274138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.274152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.274161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:39.768 [2024-05-15 00:41:52.274175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.768 [2024-05-15 00:41:52.274186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.769 [2024-05-15 00:41:52.274209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.769 [2024-05-15 00:41:52.274231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.769 [2024-05-15 00:41:52.274253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.769 [2024-05-15 00:41:52.274276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.769 [2024-05-15 00:41:52.274305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.769 [2024-05-15 00:41:52.274329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.769 [2024-05-15 00:41:52.274353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.769 [2024-05-15 00:41:52.274377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.769 [2024-05-15 00:41:52.274399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.769 [2024-05-15 00:41:52.274422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274436] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.769 [2024-05-15 00:41:52.274445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.769 [2024-05-15 00:41:52.274469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.769 [2024-05-15 00:41:52.274493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.769 [2024-05-15 00:41:52.274517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.769 [2024-05-15 00:41:52.274539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.769 [2024-05-15 00:41:52.274566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.769 [2024-05-15 00:41:52.274590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.769 [2024-05-15 00:41:52.274614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.769 [2024-05-15 00:41:52.274637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.769 [2024-05-15 00:41:52.274661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000c p:0 m:0 dnr:0 
00:26:39.769 [2024-05-15 00:41:52.274675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.769 [2024-05-15 00:41:52.274684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.769 [2024-05-15 00:41:52.274708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.769 [2024-05-15 00:41:52.274731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.769 [2024-05-15 00:41:52.274755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.769 [2024-05-15 00:41:52.274780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.769 [2024-05-15 00:41:52.274804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:83160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.769 [2024-05-15 00:41:52.274828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.769 [2024-05-15 00:41:52.274851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.769 [2024-05-15 00:41:52.274875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.769 [2024-05-15 00:41:52.274898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:91 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.769 [2024-05-15 00:41:52.274922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.769 [2024-05-15 00:41:52.274945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.769 [2024-05-15 00:41:52.274968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:39.769 [2024-05-15 00:41:52.274983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.770 [2024-05-15 00:41:52.274992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:39.770 [2024-05-15 00:41:52.275007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.770 [2024-05-15 00:41:52.275016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:39.770 [2024-05-15 00:41:52.275031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.770 [2024-05-15 00:41:52.275040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.770 [2024-05-15 00:41:52.275054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.770 [2024-05-15 00:41:52.275064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:39.770 [2024-05-15 00:41:52.275078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.770 [2024-05-15 00:41:52.275087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:39.770 [2024-05-15 00:41:52.275101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.770 [2024-05-15 00:41:52.275110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:39.770 [2024-05-15 00:41:52.275124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.770 [2024-05-15 00:41:52.275133] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:39.770 [2024-05-15 00:41:52.275148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.770 [2024-05-15 00:41:52.275157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.770 [2024-05-15 00:41:52.275171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.770 [2024-05-15 00:41:52.275180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.770 [2024-05-15 00:41:52.275195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.770 [2024-05-15 00:41:52.275204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:39.770 [2024-05-15 00:41:52.275219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.770 [2024-05-15 00:41:52.275229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:39.770 [2024-05-15 00:41:52.275244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.770 [2024-05-15 00:41:52.275253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:39.770 [2024-05-15 00:41:52.275269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.770 [2024-05-15 00:41:52.275277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:39.770 [2024-05-15 00:41:52.275292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.770 [2024-05-15 00:41:52.275301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:39.770 [2024-05-15 00:41:52.275316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.770 [2024-05-15 00:41:52.275325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:39.770 [2024-05-15 00:41:52.275340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.770 [2024-05-15 00:41:52.275349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:39.770 [2024-05-15 00:41:52.275363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:39.770 [2024-05-15 00:41:52.275372 - 00:41:52.282398] nvme_qpair.c: repeated *NOTICE* output (243:nvme_io_qpair_print_command paired with 474:spdk_nvme_print_completion) for qid:1 READ and WRITE commands, nsid:1, lba 83136-84152, len:8; every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0
00:26:39.776 [2024-05-15 00:41:52.282413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.776 [2024-05-15 00:41:52.282424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:39.776 [2024-05-15 00:41:52.282438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.776 [2024-05-15 00:41:52.282447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:39.776 [2024-05-15 00:41:52.282462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.776 [2024-05-15 00:41:52.282471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:39.776 [2024-05-15 00:41:52.282486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.776 [2024-05-15 00:41:52.282500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.776 [2024-05-15 00:41:52.282518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.776 [2024-05-15 00:41:52.282526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:39.776 [2024-05-15 00:41:52.282541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.776 [2024-05-15 00:41:52.282554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:39.776 [2024-05-15 00:41:52.282569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.776 [2024-05-15 00:41:52.282578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.282593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.282602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.282617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.282626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.282641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.282650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:117 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.282664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.282673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.282687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.282696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.282710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.282719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.282735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.282744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.282758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.282768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.282782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.282791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.282806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.282815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.282830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.282838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.282853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.282862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.282876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.282885] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.282900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.282909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.282924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.282932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.282947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.282956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.282970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.777 [2024-05-15 00:41:52.282987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.283003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.777 [2024-05-15 00:41:52.283013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.283031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.777 [2024-05-15 00:41:52.283041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.283057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:83160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.777 [2024-05-15 00:41:52.283066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.283081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:83168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.777 [2024-05-15 00:41:52.283090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.283105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.777 [2024-05-15 00:41:52.283114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.283128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:39.777 [2024-05-15 00:41:52.283137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.283152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.283161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.283175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.283185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.283199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.283208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.283222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.283231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.283245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.283254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.283269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.283277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.283293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.283303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.283318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.283329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.283343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.283354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.283369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 
lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.283379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.283394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.283404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.283420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.283430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.283445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.777 [2024-05-15 00:41:52.283453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.283469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.777 [2024-05-15 00:41:52.283478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:39.777 [2024-05-15 00:41:52.283493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.778 [2024-05-15 00:41:52.283502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.283517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.778 [2024-05-15 00:41:52.283527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.283542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.778 [2024-05-15 00:41:52.283554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.283568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.778 [2024-05-15 00:41:52.283577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.283592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.778 [2024-05-15 00:41:52.283600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.283615] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.778 [2024-05-15 00:41:52.283627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.283642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.778 [2024-05-15 00:41:52.283651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.283670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.778 [2024-05-15 00:41:52.283679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.283694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.778 [2024-05-15 00:41:52.283703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.283718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.778 [2024-05-15 00:41:52.283727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.283744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.778 [2024-05-15 00:41:52.283752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.283767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.778 [2024-05-15 00:41:52.283775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.283790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:83296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.778 [2024-05-15 00:41:52.283799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.283813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.778 [2024-05-15 00:41:52.283823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.283838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.778 [2024-05-15 00:41:52.283846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 
00:26:39.778 [2024-05-15 00:41:52.283861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.778 [2024-05-15 00:41:52.283869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.283884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.778 [2024-05-15 00:41:52.283893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.284449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.778 [2024-05-15 00:41:52.284463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.284480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.778 [2024-05-15 00:41:52.284489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.284504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.778 [2024-05-15 00:41:52.284513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.284527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.778 [2024-05-15 00:41:52.284537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.284560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.778 [2024-05-15 00:41:52.284569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.284584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:83368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.778 [2024-05-15 00:41:52.284593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.284608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.778 [2024-05-15 00:41:52.284617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.284636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:83384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.778 [2024-05-15 00:41:52.284646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.284660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.778 [2024-05-15 00:41:52.284670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.284685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.778 [2024-05-15 00:41:52.284693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.284708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.778 [2024-05-15 00:41:52.284717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.284732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.778 [2024-05-15 00:41:52.284742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.284757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.778 [2024-05-15 00:41:52.284766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.284784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.778 [2024-05-15 00:41:52.284793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.284808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.778 [2024-05-15 00:41:52.284817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.284831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.778 [2024-05-15 00:41:52.284845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.284858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.778 [2024-05-15 00:41:52.284867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.284882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.778 [2024-05-15 00:41:52.284892] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.284907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.778 [2024-05-15 00:41:52.284916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.284931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.778 [2024-05-15 00:41:52.284940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.284954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.778 [2024-05-15 00:41:52.284963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:39.778 [2024-05-15 00:41:52.284977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.778 [2024-05-15 00:41:52.284986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:39.779 [2024-05-15 00:41:52.285130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:83584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 
lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285609] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.285823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.285832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006e p:0 m:0 dnr:0 
00:26:39.779 [2024-05-15 00:41:52.286311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.286322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.286338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.286347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.286362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.286371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.286385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.286394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.286409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.286418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.286434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.286443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.286458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.779 [2024-05-15 00:41:52.286466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:39.779 [2024-05-15 00:41:52.286480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.780 [2024-05-15 00:41:52.286490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:39.780 [2024-05-15 00:41:52.286505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.780 [2024-05-15 00:41:52.286514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:39.780 [2024-05-15 00:41:52.286529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.780 [2024-05-15 00:41:52.286538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:15 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:26:39.780 [2024-05-15 00:41:52.286555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:39.780 [2024-05-15 00:41:52.286564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:26:39.780 [2024-05-15 00:41:52.286579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:39.780 [2024-05-15 00:41:52.286588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0
[... the same pair of NOTICE records (nvme_io_qpair_print_command / spdk_nvme_print_completion) repeats for every queued I/O on qid:1 -- WRITE commands in the lba 83320-84152 range and READ commands in the lba 83136-83312 range, nsid:1, len:8 -- each command completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:26:39.785 [2024-05-15 00:41:52.293070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:39.785 [2024-05-15 00:41:52.293079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 
lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293544] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 
00:26:39.785 [2024-05-15 00:41:52.293782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.785 [2024-05-15 00:41:52.293838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:39.785 [2024-05-15 00:41:52.293853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.293862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.293878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.293887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.293902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.293911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.293926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.293934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.293949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.293958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.294435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.294446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.294462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.294471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:80 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.294486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.294494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.294510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.294518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.294533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.294542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.294561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.294570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.294585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.294593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.294608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.294617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.294632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.294643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.294657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.294666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.294681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.294689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.294703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.294712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.294728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.294737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.294752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.294761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.294775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.294784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.294799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.294809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.294828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.294836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.294850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.294861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.294875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.294884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.294898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.294907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.294922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.294932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.294947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:39.786 [2024-05-15 00:41:52.294956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.294970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.294979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.294994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.295003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.295018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.295027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.295041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.295050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.295064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.295073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.295087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.295096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.295111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.295120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.295134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.295143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.295157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.295166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.295180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 
lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.295189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.295203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.295212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.295228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.295237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.295251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.295260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.295274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.786 [2024-05-15 00:41:52.295283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.295298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.786 [2024-05-15 00:41:52.295307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.295321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.786 [2024-05-15 00:41:52.295330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.295345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:83160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.786 [2024-05-15 00:41:52.295354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.295369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.786 [2024-05-15 00:41:52.295378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.295392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.786 [2024-05-15 00:41:52.295401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.295416] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.786 [2024-05-15 00:41:52.295425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:39.786 [2024-05-15 00:41:52.295438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.786 [2024-05-15 00:41:52.295448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.295463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.295472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.295486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.295495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.295510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.295519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.295534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.295543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.295560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.295569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.295584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.295593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.295608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.295616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.295631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.295640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001f p:0 m:0 dnr:0 
00:26:39.787 [2024-05-15 00:41:52.295654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.295663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.295678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.295687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.295701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.295710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.295724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.295733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.295746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.787 [2024-05-15 00:41:52.295755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.295770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.787 [2024-05-15 00:41:52.295779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.295793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.787 [2024-05-15 00:41:52.295803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.295817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.787 [2024-05-15 00:41:52.295826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.295841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.787 [2024-05-15 00:41:52.295849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.295864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.787 [2024-05-15 00:41:52.295873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.295888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.787 [2024-05-15 00:41:52.295896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.295911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.787 [2024-05-15 00:41:52.295919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.295934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.787 [2024-05-15 00:41:52.295943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.295957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.787 [2024-05-15 00:41:52.295966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.295981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.787 [2024-05-15 00:41:52.295990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.296005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.787 [2024-05-15 00:41:52.296015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.296030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.787 [2024-05-15 00:41:52.296039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.296053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.787 [2024-05-15 00:41:52.296063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.296077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.787 [2024-05-15 00:41:52.296087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.296645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.296657] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.296674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.787 [2024-05-15 00:41:52.296684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.296698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.296707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.296721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.296730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.296744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.296753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.296768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.296777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.296791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.296800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.296815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.296824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.296839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.296848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.296863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.296872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.296891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:39.787 [2024-05-15 00:41:52.296900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.296914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.296923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.296939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.296951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.296969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.296978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.296994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.297003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.297018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.297027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.297041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.297049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.297063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.297072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.297087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.297095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.297110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.787 [2024-05-15 00:41:52.297120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:39.787 [2024-05-15 00:41:52.297136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 
lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297368] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:39.788 
[2024-05-15 00:41:52.297610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:76 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.297988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.788 [2024-05-15 00:41:52.297997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:39.788 [2024-05-15 00:41:52.298477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.298488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.298510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.298519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.298533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.298542] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.298561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:83784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.298569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.298585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.298595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.298609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.298618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.298632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.298641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.298656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.298665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.298680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.298689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.298705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.298714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.298728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.298738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.298752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.298761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.298776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 
00:41:52.298784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.298798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.298807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.298821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.298833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.298852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.298862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.298879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.298891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.298912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.298923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.298939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.298952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.298968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.298978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.298993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.299004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.299028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83936 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.299052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.299075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.299099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.299123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.299146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.299170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.299194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.299218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.299242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.299265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:98 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.299289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.299312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.299335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.299359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.789 [2024-05-15 00:41:52.299382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.789 [2024-05-15 00:41:52.299407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.789 [2024-05-15 00:41:52.299431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:83160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.789 [2024-05-15 00:41:52.299455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:83168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.789 [2024-05-15 00:41:52.299479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.789 [2024-05-15 00:41:52.299504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299520] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.789 [2024-05-15 00:41:52.299529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.299555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.299579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.299605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.299629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.299654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.299678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.299700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:39.789 [2024-05-15 00:41:52.299716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.789 [2024-05-15 00:41:52.299725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.299743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.299755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001f p:0 m:0 
dnr:0 00:26:39.790 [2024-05-15 00:41:52.299770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.299779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.299796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.299805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.299820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.299829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.299844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.299853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.299869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.790 [2024-05-15 00:41:52.299879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.299894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.790 [2024-05-15 00:41:52.299904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.299918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.790 [2024-05-15 00:41:52.299931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.299946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.790 [2024-05-15 00:41:52.299955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.299971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.790 [2024-05-15 00:41:52.299981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.299996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:83232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.790 [2024-05-15 00:41:52.300006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.300020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.790 [2024-05-15 00:41:52.300030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.300046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.790 [2024-05-15 00:41:52.300054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.300069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:83256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.790 [2024-05-15 00:41:52.300078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.300094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.790 [2024-05-15 00:41:52.300103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.300118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.790 [2024-05-15 00:41:52.300127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.300141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.790 [2024-05-15 00:41:52.300151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.300166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:83288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.790 [2024-05-15 00:41:52.300174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.300189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:83296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.790 [2024-05-15 00:41:52.300200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.300773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.790 [2024-05-15 00:41:52.300785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.300801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.300810] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.300826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:83312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.790 [2024-05-15 00:41:52.300835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.300850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:83320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.300859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.300874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:83328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.300883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.300898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:83336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.300907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.300922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.300931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.300946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:83352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.300956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.300970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:83360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.300979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.300994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.301004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.301018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.301027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.301059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:83384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:39.790 [2024-05-15 00:41:52.301068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.301082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.301091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.301106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.301115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.301131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.301141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.301156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.301166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.301183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.301192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.301207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.301216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.301230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.301239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.301254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.301264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.301279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.301287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.301303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 
lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.301313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.301328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.301337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.301351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.301362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.301377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.301386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.301401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.301410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.301424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.301433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.301448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.301457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.301471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.301480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.301495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.301504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:39.790 [2024-05-15 00:41:52.301519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.790 [2024-05-15 00:41:52.301527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.301542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.301554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.301570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.301579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.301594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.301602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.301617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.301627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.301641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.301650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.301664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.301673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.301687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.301697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.301712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.301721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.301735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.301744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.301758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.301766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005a p:0 m:0 dnr:0 
00:26:39.791 [2024-05-15 00:41:52.301781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.301791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.301805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.301814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.301828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.301837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.301852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.301862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.301877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.301886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.301901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.301909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.301924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.301933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.301947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.301957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.301971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.301980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.301995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.302004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:99 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.302019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.302027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.302041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.302051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.302066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.302075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.302089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.302099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.302114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.302123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.302138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.302148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.302630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.302641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.302656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.302664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.302678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.302685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:39.791 [2024-05-15 00:41:52.302699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.791 [2024-05-15 00:41:52.302708] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:26:39.791-00:26:39.795 [2024-05-15 00:41:52.302721 - 00:41:52.309488] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated WRITE and READ commands (sqid:1, nsid:1, lba 83136-84152, len:8, SGL DATA BLOCK OFFSET / SGL TRANSPORT DATA BLOCK TRANSPORT) each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0 (identical per-command notices condensed)
00:26:39.795 [2024-05-15 00:41:52.309488] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.795 [2024-05-15 00:41:52.309499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:39.795 [2024-05-15 00:41:52.309517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.795 [2024-05-15 00:41:52.309528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:39.795 [2024-05-15 00:41:52.309545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.795 [2024-05-15 00:41:52.309567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:39.795 [2024-05-15 00:41:52.309585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:83408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.795 [2024-05-15 00:41:52.309595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:39.795 [2024-05-15 00:41:52.309613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:83416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.795 [2024-05-15 00:41:52.309623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:39.795 [2024-05-15 00:41:52.309641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:83424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.795 [2024-05-15 00:41:52.309653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.795 [2024-05-15 00:41:52.309670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:83432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.795 [2024-05-15 00:41:52.309681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:39.795 [2024-05-15 00:41:52.309699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.795 [2024-05-15 00:41:52.309710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:39.795 [2024-05-15 00:41:52.309728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:83448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.795 [2024-05-15 00:41:52.309738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:39.795 [2024-05-15 00:41:52.309756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.795 [2024-05-15 00:41:52.309767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:39.795 [2024-05-15 00:41:52.309787] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.795 [2024-05-15 00:41:52.309799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:39.795 [2024-05-15 00:41:52.309816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:83472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.795 [2024-05-15 00:41:52.309826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:39.795 [2024-05-15 00:41:52.309844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:83480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.795 [2024-05-15 00:41:52.309855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:39.795 [2024-05-15 00:41:52.309872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:83488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.795 [2024-05-15 00:41:52.309884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:39.795 [2024-05-15 00:41:52.309902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.795 [2024-05-15 00:41:52.309913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:39.795 [2024-05-15 00:41:52.309930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.795 [2024-05-15 00:41:52.309943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:39.795 [2024-05-15 00:41:52.309961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:83512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.795 [2024-05-15 00:41:52.309971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:39.795 [2024-05-15 00:41:52.309989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.795 [2024-05-15 00:41:52.310001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:39.795 [2024-05-15 00:41:52.310018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:83528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.795 [2024-05-15 00:41:52.310029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:39.795 [2024-05-15 00:41:52.310047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:83536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.795 [2024-05-15 00:41:52.310058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0050 p:0 m:0 
dnr:0 00:26:39.795 [2024-05-15 00:41:52.310075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:83544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.795 [2024-05-15 00:41:52.310086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.310104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.310116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.310136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:83560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.310147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.310164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:83568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.310176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.310193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:83576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.310204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.310221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:83584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.310232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.310251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.310263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.310281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.310291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.310310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.310321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.310340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:83616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.310351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.310368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.310379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.310398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:83632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.310410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.310428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:83640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.310439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.310457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:83648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.310469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.310488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:83656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.310501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.310519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.310530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.310549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:83672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.310565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.310583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:83680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.310594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.310613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.310627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.310648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:83696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.310660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.310678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:83704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.310691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.310710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.310722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.310741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.310753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.310773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.310785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.310996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.311010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.311065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.311077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.311103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.311118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.311141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.311154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.311176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.311189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.311211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:39.796 [2024-05-15 00:41:52.311222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.311245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:83784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.311258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.311280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.311294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.311316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.311328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.311352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.311364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.311386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.311397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.311420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.311433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.311455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.311467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.311489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.311501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.311524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.311536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.311567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 
lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.311579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.311601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.796 [2024-05-15 00:41:52.311613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:39.796 [2024-05-15 00:41:52.311636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.311648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.311671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.311683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.311706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.311717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.311744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.311756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.311778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.311790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.311812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.311824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.311847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.311860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.311882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.311895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.311918] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.311930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.311952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.311964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.311988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.312000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.312022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.312034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.312056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.312068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.312091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.312102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.312125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.312137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.312159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.312171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.312193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.312205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.312228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.312240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000b p:0 m:0 dnr:0 
00:26:39.797 [2024-05-15 00:41:52.312263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.312275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.312298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.312310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.312332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.312344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.312366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.312377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.312400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.797 [2024-05-15 00:41:52.312414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.312437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.797 [2024-05-15 00:41:52.312450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.312472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:83152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.797 [2024-05-15 00:41:52.312484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.312507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:83160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.797 [2024-05-15 00:41:52.312519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.312542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.797 [2024-05-15 00:41:52.312557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.312580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:83176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.797 [2024-05-15 00:41:52.312592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:108 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.312615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:83184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.797 [2024-05-15 00:41:52.312627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.312650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.312661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.312683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.312695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.312717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.312729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.312751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.312764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.312786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.312798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.312821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.312835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.312857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.312869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.312892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.312904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.313051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.313064] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.313091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.313103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.313130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.313142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.313168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.313180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.313206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.797 [2024-05-15 00:41:52.313218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.313244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:83192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.797 [2024-05-15 00:41:52.313256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.313283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.797 [2024-05-15 00:41:52.313295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.313320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:83208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.797 [2024-05-15 00:41:52.313331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.313357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.797 [2024-05-15 00:41:52.313370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.313396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:83224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.797 [2024-05-15 00:41:52.313411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:39.797 [2024-05-15 00:41:52.313438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:83232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:39.797 [2024-05-15 00:41:52.313451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:41:52.313478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:41:52.313490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:41:52.313516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:41:52.313530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:41:52.313562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:83256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:41:52.313575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:41:52.313601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:83264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:41:52.313613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:41:52.313639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:83272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:41:52.313651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:41:52.313677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:83280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:41:52.313690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.445812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:50080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.798 [2024-05-15 00:42:03.445887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.445943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:50096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.798 [2024-05-15 00:42:03.445954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.445970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:50112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.798 [2024-05-15 00:42:03.445978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.445992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 
nsid:1 lba:50128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.798 [2024-05-15 00:42:03.446001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.446017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.798 [2024-05-15 00:42:03.446026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.446047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:50160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.798 [2024-05-15 00:42:03.446056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.446070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:50176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.798 [2024-05-15 00:42:03.446080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.446097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:49272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.446107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.446125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:49304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.446135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.446151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:49336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.446159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.446174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:49368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.446182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.446197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:49400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.446205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.446656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:49424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.446669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.446687] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:49456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.446695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.446710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:49488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.446719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.446735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:49520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.446744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.446758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:49552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.446768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.446785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:49584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.446794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.446808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:49616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.446817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.446832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:49648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.446840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.446854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:49680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.446863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.446877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:49712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.446886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.446900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:49744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.446909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000c p:0 m:0 
dnr:0 00:26:39.798 [2024-05-15 00:42:03.446924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:49776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.446932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.446947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:49808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.446956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.446970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:49416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.446979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.446994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:49448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.447002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.447017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:49480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.447027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.447043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:49512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.447050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.447066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:49544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.447075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.447090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:49576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.447098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.447112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:49608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.447121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.447135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:49640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.447144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.447159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:49672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.447166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.447182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:49704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.447189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.447205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:49736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.447213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.447228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:49768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.447236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.447250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:49800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.447258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.447273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:49832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.798 [2024-05-15 00:42:03.447282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:39.798 [2024-05-15 00:42:03.447297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:50200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.799 [2024-05-15 00:42:03.447305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:39.799 [2024-05-15 00:42:03.447320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:50216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.799 [2024-05-15 00:42:03.447328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:39.799 [2024-05-15 00:42:03.447345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:50232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.799 [2024-05-15 00:42:03.447355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:39.799 [2024-05-15 00:42:03.447792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:50248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.799 [2024-05-15 00:42:03.447804] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:39.799 [2024-05-15 00:42:03.447820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:50264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.799 [2024-05-15 00:42:03.447829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:39.799 [2024-05-15 00:42:03.447844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:50280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:39.799 [2024-05-15 00:42:03.447852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.799 [2024-05-15 00:42:03.447868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:49848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.799 [2024-05-15 00:42:03.447878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:39.799 [2024-05-15 00:42:03.447893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:49880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.799 [2024-05-15 00:42:03.447901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:39.799 [2024-05-15 00:42:03.447916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:49912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.799 [2024-05-15 00:42:03.447925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:39.799 [2024-05-15 00:42:03.447939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:49944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.799 [2024-05-15 00:42:03.447948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:39.799 [2024-05-15 00:42:03.447962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:49976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.799 [2024-05-15 00:42:03.447971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:39.799 [2024-05-15 00:42:03.447985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:50008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.799 [2024-05-15 00:42:03.447994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:39.799 [2024-05-15 00:42:03.448009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:50040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.799 [2024-05-15 00:42:03.448018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:39.799 [2024-05-15 00:42:03.448033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:50072 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:39.799 [2024-05-15 00:42:03.448041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:39.799 [2024-05-15 00:42:03.448056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:49856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.799 [2024-05-15 00:42:03.448065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:39.799 [2024-05-15 00:42:03.448082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:49888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.799 [2024-05-15 00:42:03.448090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:39.799 [2024-05-15 00:42:03.448105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:49920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.799 [2024-05-15 00:42:03.448113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:39.799 [2024-05-15 00:42:03.448128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:49952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.799 [2024-05-15 00:42:03.448136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:39.799 [2024-05-15 00:42:03.448152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:49984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.799 [2024-05-15 00:42:03.448160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:39.799 [2024-05-15 00:42:03.448174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:50016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.799 [2024-05-15 00:42:03.448182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:39.799 [2024-05-15 00:42:03.448197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:50048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.799 [2024-05-15 00:42:03.448207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:39.799 Received shutdown signal, test time was about 23.863849 seconds 00:26:39.799 00:26:39.799 Latency(us) 00:26:39.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:39.799 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:39.799 Verification LBA range: start 0x0 length 0x4000 00:26:39.799 Nvme0n1 : 23.86 10844.27 42.36 0.00 0.00 11785.91 1319.34 3072879.56 00:26:39.799 =================================================================================================================== 00:26:39.799 Total : 10844.27 42.36 0.00 0.00 11785.91 1319.34 3072879.56 00:26:39.799 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:26:40.057 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:26:40.057 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:40.057 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:26:40.057 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:40.057 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:26:40.057 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:40.057 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:26:40.057 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:40.057 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:40.057 rmmod nvme_tcp 00:26:40.057 rmmod nvme_fabrics 00:26:40.057 rmmod nvme_keyring 00:26:40.057 00:42:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:40.057 00:42:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:26:40.057 00:42:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:26:40.057 00:42:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2127606 ']' 00:26:40.057 00:42:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2127606 00:26:40.057 00:42:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@947 -- # '[' -z 2127606 ']' 00:26:40.057 00:42:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@951 -- # kill -0 2127606 00:26:40.057 00:42:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # uname 00:26:40.057 00:42:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:26:40.057 00:42:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2127606 00:26:40.057 00:42:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:26:40.057 00:42:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:26:40.057 00:42:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2127606' 00:26:40.057 killing process with pid 2127606 00:26:40.057 00:42:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # kill 2127606 00:26:40.057 [2024-05-15 00:42:06.047711] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:26:40.057 00:42:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # wait 2127606 00:26:40.623 00:42:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:40.623 00:42:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:40.623 00:42:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:40.623 00:42:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:40.623 00:42:06 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:40.623 00:42:06 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.623 00:42:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:40.623 00:42:06 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.528 00:42:08 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:42.528 00:26:42.528 real 0m35.580s 00:26:42.528 user 1m33.202s 00:26:42.528 sys 0m8.718s 00:26:42.528 00:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # xtrace_disable 00:26:42.528 00:42:08 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:42.528 ************************************ 00:26:42.528 END TEST nvmf_host_multipath_status 00:26:42.528 ************************************ 00:26:42.528 00:42:08 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:42.528 00:42:08 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:26:42.528 00:42:08 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:26:42.528 00:42:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:42.787 ************************************ 00:26:42.787 START TEST nvmf_discovery_remove_ifc 00:26:42.787 ************************************ 00:26:42.787 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:26:42.787 * Looking for test storage... 
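For context, the nvmf_discovery_remove_ifc suite that starts here is launched through run_test from autotest_common.sh, which is what produces the START TEST/END TEST banners and the real/user/sys timing lines in this log. A minimal sketch of reproducing the same run by hand, assuming a built SPDK tree at the workspace path shown above (the standalone invocation itself is an assumption, not something this log performed):

  # Hedged sketch: run the same host test directly from a built SPDK tree.
  cd /var/jenkins/workspace/dsa-phy-autotest/spdk
  sudo ./test/nvmf/host/discovery_remove_ifc.sh --transport=tcp

The script sources test/nvmf/common.sh first, which is where the cvl_0_* interface names, the 10.0.0.0/24 addressing and the network-namespace setup visible below come from.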
00:26:42.787 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:26:42.787 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:26:42.787 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:42.787 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:42.787 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:42.787 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:42.787 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:42.787 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:42.787 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:42.787 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:42.787 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:42.787 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:42.787 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:42.788 00:42:08 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:26:49.351 Found 0000:27:00.0 (0x8086 - 0x159b) 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:26:49.351 Found 0000:27:00.1 (0x8086 - 0x159b) 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.351 00:42:14 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.351 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:26:49.352 Found net devices under 0000:27:00.0: cvl_0_0 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:26:49.352 Found net devices under 0000:27:00.1: cvl_0_1 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:49.352 00:42:14 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:49.352 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:49.352 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:49.352 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:49.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:49.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.566 ms 00:26:49.352 00:26:49.352 --- 10.0.0.2 ping statistics --- 00:26:49.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.352 rtt min/avg/max/mdev = 0.566/0.566/0.566/0.000 ms 00:26:49.352 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:49.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:49.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.306 ms 00:26:49.352 00:26:49.352 --- 10.0.0.1 ping statistics --- 00:26:49.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.352 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:26:49.352 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:49.352 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:49.352 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:49.352 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:49.352 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:49.352 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:49.352 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:49.352 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:49.352 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:49.352 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:49.352 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:49.352 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@721 -- # xtrace_disable 00:26:49.352 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.352 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2137824 00:26:49.352 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2137824 00:26:49.352 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # '[' -z 2137824 ']' 00:26:49.352 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:49.352 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:49.352 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:49.352 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:49.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:49.352 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:49.352 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.352 [2024-05-15 00:42:15.184487] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:26:49.352 [2024-05-15 00:42:15.184621] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:49.352 EAL: No free 2048 kB hugepages reported on node 1 00:26:49.352 [2024-05-15 00:42:15.326887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.352 [2024-05-15 00:42:15.437809] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:49.352 [2024-05-15 00:42:15.437846] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:49.352 [2024-05-15 00:42:15.437857] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:49.352 [2024-05-15 00:42:15.437868] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:49.352 [2024-05-15 00:42:15.437876] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:49.352 [2024-05-15 00:42:15.437911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.917 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:49.917 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@861 -- # return 0 00:26:49.917 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:49.917 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@727 -- # xtrace_disable 00:26:49.917 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.918 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:49.918 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:49.918 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:49.918 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.918 [2024-05-15 00:42:15.926392] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:49.918 [2024-05-15 00:42:15.934328] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:26:49.918 [2024-05-15 00:42:15.934580] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:49.918 null0 00:26:49.918 [2024-05-15 00:42:15.966456] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:49.918 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:49.918 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2138027 00:26:49.918 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2138027 /tmp/host.sock 00:26:49.918 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@828 -- # '[' -z 2138027 ']' 00:26:49.918 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:49.918 00:42:15 
nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@832 -- # local rpc_addr=/tmp/host.sock 00:26:49.918 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local max_retries=100 00:26:49.918 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:49.918 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:49.918 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # xtrace_disable 00:26:49.918 00:42:15 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:49.918 [2024-05-15 00:42:16.059701] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:26:49.918 [2024-05-15 00:42:16.059802] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2138027 ] 00:26:50.176 EAL: No free 2048 kB hugepages reported on node 1 00:26:50.176 [2024-05-15 00:42:16.170977] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:50.176 [2024-05-15 00:42:16.267199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:50.745 00:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:26:50.745 00:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@861 -- # return 0 00:26:50.745 00:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:50.745 00:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:50.745 00:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:50.745 00:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:50.745 00:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:50.745 00:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:50.745 00:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:50.745 00:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:51.005 00:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:51.005 00:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:51.005 00:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:51.005 00:42:16 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:51.940 [2024-05-15 00:42:17.981888] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:51.940 [2024-05-15 00:42:17.981918] bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:51.940 [2024-05-15 
00:42:17.981953] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:51.940 [2024-05-15 00:42:18.069981] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:52.198 [2024-05-15 00:42:18.170768] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:52.198 [2024-05-15 00:42:18.170825] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:52.198 [2024-05-15 00:42:18.170864] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:52.198 [2024-05-15 00:42:18.170884] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:52.198 [2024-05-15 00:42:18.170909] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:52.198 00:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:52.198 00:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:52.198 00:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:52.198 00:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:52.198 00:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:52.198 00:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:52.198 00:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:52.198 00:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:52.198 00:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:52.198 [2024-05-15 00:42:18.178422] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150003a1400 was disconnected and freed. delete nvme_qpair. 
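The bdev_get_bdevs calls repeated above and below are the test's get_bdev_list/wait_for_bdev helpers: once a second they ask the host-side app listening on /tmp/host.sock for its bdev names and compare the result against the expected value (nvme0n1 at this point). A rough equivalent of that loop, based only on the pipeline visible in this trace (calling rpc.py directly instead of the rpc_cmd wrapper, and the 30-second bound, are assumptions):

  # Poll the host app's bdev list until the discovered namespace appears.
  rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
  expected=nvme0n1
  for _ in $(seq 1 30); do
      names=$("$rpc" -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
      [[ "$names" == "$expected" ]] && break
      sleep 1
  done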
00:26:52.198 00:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:52.198 00:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:52.198 00:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:52.198 00:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:52.198 00:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:52.198 00:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:52.198 00:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:52.198 00:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:52.198 00:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:52.198 00:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:52.198 00:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:52.198 00:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:52.198 00:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:52.198 00:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:52.198 00:42:18 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:53.575 00:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:53.575 00:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:53.575 00:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:53.575 00:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:53.575 00:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:53.575 00:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:53.575 00:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:53.575 00:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:53.575 00:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:53.575 00:42:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:54.510 00:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:54.510 00:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:54.510 00:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:54.510 00:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:54.510 00:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:54.510 00:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:26:54.510 00:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:54.510 00:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:54.510 00:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:54.510 00:42:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:55.449 00:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:55.449 00:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:55.449 00:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:55.449 00:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:55.449 00:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:55.449 00:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:55.449 00:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:55.449 00:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:55.449 00:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:55.449 00:42:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:56.383 00:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:56.383 00:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:56.383 00:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:56.383 00:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:56.383 00:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:56.383 00:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:56.383 00:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:56.383 00:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:56.383 00:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:56.383 00:42:22 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:57.759 00:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:57.759 00:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:57.759 00:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:57.759 00:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:57.759 00:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:57.759 00:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:57.759 00:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:57.759 00:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 
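The event this polling loop is waiting out was injected a few entries back at host/discovery_remove_ifc.sh@75-76: the target-side address is deleted and the link taken down inside the target's network namespace, so the TCP connection to 10.0.0.2:4420 can no longer be serviced. The two commands exactly as they appear in the trace (the namespace and interface names are simply the ones this run picked):

  # Drop the target's address and take its link down inside the netns.
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

Later in the trace (@82-83, @86) the address is added back, the link brought up, and the test waits for the rediscovered namespace to reappear as nvme1n1.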
00:26:57.759 00:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:57.759 00:42:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:57.759 [2024-05-15 00:42:23.599359] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:57.759 [2024-05-15 00:42:23.599426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.759 [2024-05-15 00:42:23.599441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.759 [2024-05-15 00:42:23.599456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.759 [2024-05-15 00:42:23.599465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.759 [2024-05-15 00:42:23.599475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.759 [2024-05-15 00:42:23.599483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.759 [2024-05-15 00:42:23.599492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.759 [2024-05-15 00:42:23.599500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.759 [2024-05-15 00:42:23.599510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:57.759 [2024-05-15 00:42:23.599525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:57.759 [2024-05-15 00:42:23.599537] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a1180 is same with the state(5) to be set 00:26:57.759 [2024-05-15 00:42:23.609352] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a1180 (9): Bad file descriptor 00:26:57.759 [2024-05-15 00:42:23.619373] nvme_ctrlr.c:1652:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:58.739 00:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:58.739 00:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:58.739 00:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:58.739 00:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:58.739 00:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:58.739 00:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:26:58.739 00:42:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:58.739 [2024-05-15 00:42:24.663598] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:59.677 
[2024-05-15 00:42:25.687624] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:59.677 [2024-05-15 00:42:25.687698] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6150003a1180 with addr=10.0.0.2, port=4420 00:26:59.677 [2024-05-15 00:42:25.687724] nvme_tcp.c: 323:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6150003a1180 is same with the state(5) to be set 00:26:59.677 [2024-05-15 00:42:25.688397] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a1180 (9): Bad file descriptor 00:26:59.677 [2024-05-15 00:42:25.688433] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:59.677 [2024-05-15 00:42:25.688480] bdev_nvme.c:6718:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:59.677 [2024-05-15 00:42:25.688524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:59.677 [2024-05-15 00:42:25.688546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.677 [2024-05-15 00:42:25.688588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:59.677 [2024-05-15 00:42:25.688604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.677 [2024-05-15 00:42:25.688620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:59.677 [2024-05-15 00:42:25.688636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.677 [2024-05-15 00:42:25.688651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:59.677 [2024-05-15 00:42:25.688665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.677 [2024-05-15 00:42:25.688682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:59.677 [2024-05-15 00:42:25.688698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:59.677 [2024-05-15 00:42:25.688713] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:26:59.677 [2024-05-15 00:42:25.688824] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a0f00 (9): Bad file descriptor 00:26:59.677 [2024-05-15 00:42:25.689812] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:59.677 [2024-05-15 00:42:25.689836] nvme_ctrlr.c:1149:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:59.677 00:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:26:59.677 00:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:59.677 00:42:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:00.612 00:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:00.612 00:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:00.612 00:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:00.612 00:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:00.612 00:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:00.612 00:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:00.612 00:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:00.612 00:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:00.612 00:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:00.612 00:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:00.612 00:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:00.872 00:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:00.872 00:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:00.872 00:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:00.872 00:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:00.872 00:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:00.872 00:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:00.872 00:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:00.872 00:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:00.872 00:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:00.872 00:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:00.872 00:42:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:01.807 [2024-05-15 00:42:27.701834] bdev_nvme.c:6967:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:01.807 [2024-05-15 00:42:27.701860] 
bdev_nvme.c:7047:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:01.807 [2024-05-15 00:42:27.701878] bdev_nvme.c:6930:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:01.807 [2024-05-15 00:42:27.788947] bdev_nvme.c:6896:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:01.807 00:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:01.807 00:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:01.807 00:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:01.807 00:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:01.807 00:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:01.807 00:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:01.807 00:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:01.807 00:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:01.807 [2024-05-15 00:42:27.891887] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:01.807 [2024-05-15 00:42:27.891937] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:01.807 [2024-05-15 00:42:27.891972] bdev_nvme.c:7757:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:01.807 [2024-05-15 00:42:27.891992] bdev_nvme.c:6786:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:01.807 [2024-05-15 00:42:27.892004] bdev_nvme.c:6745:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:01.807 00:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:01.807 00:42:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:01.807 [2024-05-15 00:42:27.900093] bdev_nvme.c:1607:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x6150003a1b80 was disconnected and freed. delete nvme_qpair. 
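The loop traced above keeps re-reading the host-side bdev list once per second until the expected device name appears (or disappears) after the interface flap. A rough bash reconstruction of that pattern, pieced together only from the xtrace lines — `get_bdev_list` and `wait_for_bdev` are the helper names visible in host/discovery_remove_ifc.sh, and `rpc_cmd` is the framework wrapper seen in the trace; the real script may differ in detail:

```bash
# Reconstructed from the xtrace above; not copied from the actual test script.
get_bdev_list() {
    # Query the host-side SPDK app over its RPC socket and normalize the
    # bdev names into a single sorted, space-separated line.
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    local bdev=$1
    # Poll once per second until the bdev list equals the expected value
    # (empty while the controller is gone, nvme1n1 once discovery re-attaches it).
    while [[ "$(get_bdev_list)" != "$bdev" ]]; do
        sleep 1
    done
}
```

The trace uses the same helper in both directions: earlier it waits for the list to drain to '' while the interface is down, and after `ip addr add` / `ip link set ... up` it waits for the re-attached namespace to surface as nvme1n1.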
00:27:02.743 00:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:02.743 00:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:02.743 00:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:02.743 00:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:02.743 00:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:02.743 00:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:02.743 00:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:03.003 00:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:03.003 00:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:03.003 00:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:03.003 00:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2138027 00:27:03.003 00:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' -z 2138027 ']' 00:27:03.003 00:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # kill -0 2138027 00:27:03.003 00:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # uname 00:27:03.003 00:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:03.003 00:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2138027 00:27:03.003 00:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:27:03.003 00:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:27:03.003 00:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2138027' 00:27:03.003 killing process with pid 2138027 00:27:03.003 00:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # kill 2138027 00:27:03.003 00:42:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # wait 2138027 00:27:03.263 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:03.263 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:03.263 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:27:03.263 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:03.263 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:27:03.263 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:03.263 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:03.263 rmmod nvme_tcp 00:27:03.263 rmmod nvme_fabrics 00:27:03.522 rmmod nvme_keyring 00:27:03.522 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:03.522 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:27:03.522 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
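For reference, the `killprocess` helper whose xtrace appears just above (pid 2138027, the host-side app) follows a simple guard-then-kill pattern. This is a sketch inferred only from the traced commands; the real helper in common/autotest_common.sh has branches (such as the sudo-wrapped case) that are not exercised here:

```bash
# Inferred from the traced commands only; the real common/autotest_common.sh
# helper handles additional cases (e.g. processes wrapped by sudo).
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1
    kill -0 "$pid" || return 1                      # bail out if it already exited
    if [[ $(uname) == Linux ]]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        if [[ $process_name == sudo ]]; then
            :   # sudo-wrapped processes are handled differently (not traced here)
        fi
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                             # reap the child started by this shell
}
```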
00:27:03.522 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2137824 ']' 00:27:03.522 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2137824 00:27:03.522 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@947 -- # '[' -z 2137824 ']' 00:27:03.522 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@951 -- # kill -0 2137824 00:27:03.522 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # uname 00:27:03.523 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:27:03.523 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2137824 00:27:03.523 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:27:03.523 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:27:03.523 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2137824' 00:27:03.523 killing process with pid 2137824 00:27:03.523 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # kill 2137824 00:27:03.523 [2024-05-15 00:42:29.523167] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:27:03.523 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # wait 2137824 00:27:04.088 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:04.088 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:04.088 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:04.088 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:04.088 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:04.088 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.088 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:04.088 00:42:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:05.991 00:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:05.992 00:27:05.992 real 0m23.304s 00:27:05.992 user 0m28.016s 00:27:05.992 sys 0m6.030s 00:27:05.992 00:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:05.992 00:42:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:05.992 ************************************ 00:27:05.992 END TEST nvmf_discovery_remove_ifc 00:27:05.992 ************************************ 00:27:05.992 00:42:32 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:05.992 00:42:32 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:27:05.992 00:42:32 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:05.992 00:42:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:27:05.992 ************************************ 00:27:05.992 START TEST nvmf_identify_kernel_target 00:27:05.992 ************************************ 00:27:05.992 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:06.249 * Looking for test storage... 00:27:06.249 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:27:06.249 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:27:06.249 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:06.249 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:06.249 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:06.249 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:06.249 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:06.249 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:06.249 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:06.249 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:06.249 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:06.249 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:06.249 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:06.249 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:27:06.249 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:27:06.249 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:06.249 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:06.249 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:27:06.249 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:06.249 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:27:06.249 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:06.249 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:06.249 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:06.250 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.250 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.250 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.250 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:06.250 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.250 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:27:06.250 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:06.250 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:06.250 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:06.250 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:06.250 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:06.250 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:06.250 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:06.250 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:06.250 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:06.250 00:42:32 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:06.250 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:06.250 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:06.250 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:06.250 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:06.250 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.250 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:06.250 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.250 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:27:06.250 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:06.250 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:27:06.250 00:42:32 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:11.512 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:11.512 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:11.512 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:11.512 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:11.512 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:11.512 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:11.512 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:11.512 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:11.512 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:11.512 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:27:11.512 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:11.512 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:27:11.512 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:11.512 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:27:11.512 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:11.512 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:11.512 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:11.512 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:11.512 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:11.512 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:11.512 00:42:37 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:11.512 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:11.512 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:11.512 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:11.512 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:27:11.513 Found 0000:27:00.0 (0x8086 - 0x159b) 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:27:11.513 Found 0000:27:00.1 (0x8086 - 0x159b) 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.513 00:42:37 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:27:11.513 Found net devices under 0000:27:00.0: cvl_0_0 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:27:11.513 Found net devices under 0000:27:00.1: cvl_0_1 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:11.513 00:42:37 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:11.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:11.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:27:11.513 00:27:11.513 --- 10.0.0.2 ping statistics --- 00:27:11.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.513 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:11.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:11.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:27:11.513 00:27:11.513 --- 10.0.0.1 ping statistics --- 00:27:11.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:11.513 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:11.513 00:42:37 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:11.513 00:42:37 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:27:14.051 Waiting for block devices as requested 00:27:14.051 0000:c9:00.0 (8086 0a54): vfio-pci -> nvme 00:27:14.310 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:27:14.310 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:27:14.567 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:27:14.567 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:27:14.825 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:27:14.825 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:27:14.825 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:27:15.082 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:27:15.082 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:27:15.340 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:27:15.340 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:27:15.340 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:27:15.600 0000:ca:00.0 (8086 0a54): vfio-pci -> nvme 00:27:15.858 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:27:15.858 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:27:15.858 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:27:16.116 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:16.681 No valid GPT data, bailing 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 
00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:27:16.681 No valid GPT data, bailing 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -a 10.0.0.1 -t tcp -s 4420 00:27:16.681 00:27:16.681 Discovery Log Number of Records 2, Generation counter 2 00:27:16.681 =====Discovery Log Entry 0====== 00:27:16.681 trtype: tcp 00:27:16.681 adrfam: ipv4 00:27:16.681 subtype: current discovery subsystem 00:27:16.681 treq: not specified, sq flow control disable supported 00:27:16.681 portid: 1 00:27:16.681 trsvcid: 4420 00:27:16.681 subnqn: nqn.2014-08.org.nvmexpress.discovery 
00:27:16.681 traddr: 10.0.0.1 00:27:16.681 eflags: none 00:27:16.681 sectype: none 00:27:16.681 =====Discovery Log Entry 1====== 00:27:16.681 trtype: tcp 00:27:16.681 adrfam: ipv4 00:27:16.681 subtype: nvme subsystem 00:27:16.681 treq: not specified, sq flow control disable supported 00:27:16.681 portid: 1 00:27:16.681 trsvcid: 4420 00:27:16.681 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:16.681 traddr: 10.0.0.1 00:27:16.681 eflags: none 00:27:16.681 sectype: none 00:27:16.681 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:16.681 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:16.681 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.940 ===================================================== 00:27:16.940 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:16.940 ===================================================== 00:27:16.940 Controller Capabilities/Features 00:27:16.940 ================================ 00:27:16.940 Vendor ID: 0000 00:27:16.941 Subsystem Vendor ID: 0000 00:27:16.941 Serial Number: fff8c435bc36db76f513 00:27:16.941 Model Number: Linux 00:27:16.941 Firmware Version: 6.7.0-68 00:27:16.941 Recommended Arb Burst: 0 00:27:16.941 IEEE OUI Identifier: 00 00 00 00:27:16.941 Multi-path I/O 00:27:16.941 May have multiple subsystem ports: No 00:27:16.941 May have multiple controllers: No 00:27:16.941 Associated with SR-IOV VF: No 00:27:16.941 Max Data Transfer Size: Unlimited 00:27:16.941 Max Number of Namespaces: 0 00:27:16.941 Max Number of I/O Queues: 1024 00:27:16.941 NVMe Specification Version (VS): 1.3 00:27:16.941 NVMe Specification Version (Identify): 1.3 00:27:16.941 Maximum Queue Entries: 1024 00:27:16.941 Contiguous Queues Required: No 00:27:16.941 Arbitration Mechanisms Supported 00:27:16.941 Weighted Round Robin: Not Supported 00:27:16.941 Vendor Specific: Not Supported 00:27:16.941 Reset Timeout: 7500 ms 00:27:16.941 Doorbell Stride: 4 bytes 00:27:16.941 NVM Subsystem Reset: Not Supported 00:27:16.941 Command Sets Supported 00:27:16.941 NVM Command Set: Supported 00:27:16.941 Boot Partition: Not Supported 00:27:16.941 Memory Page Size Minimum: 4096 bytes 00:27:16.941 Memory Page Size Maximum: 4096 bytes 00:27:16.941 Persistent Memory Region: Not Supported 00:27:16.941 Optional Asynchronous Events Supported 00:27:16.941 Namespace Attribute Notices: Not Supported 00:27:16.941 Firmware Activation Notices: Not Supported 00:27:16.941 ANA Change Notices: Not Supported 00:27:16.941 PLE Aggregate Log Change Notices: Not Supported 00:27:16.941 LBA Status Info Alert Notices: Not Supported 00:27:16.941 EGE Aggregate Log Change Notices: Not Supported 00:27:16.941 Normal NVM Subsystem Shutdown event: Not Supported 00:27:16.941 Zone Descriptor Change Notices: Not Supported 00:27:16.941 Discovery Log Change Notices: Supported 00:27:16.941 Controller Attributes 00:27:16.941 128-bit Host Identifier: Not Supported 00:27:16.941 Non-Operational Permissive Mode: Not Supported 00:27:16.941 NVM Sets: Not Supported 00:27:16.941 Read Recovery Levels: Not Supported 00:27:16.941 Endurance Groups: Not Supported 00:27:16.941 Predictable Latency Mode: Not Supported 00:27:16.941 Traffic Based Keep ALive: Not Supported 00:27:16.941 Namespace Granularity: Not Supported 00:27:16.941 SQ Associations: Not Supported 00:27:16.941 UUID List: Not Supported 00:27:16.941 Multi-Domain 
Subsystem: Not Supported 00:27:16.941 Fixed Capacity Management: Not Supported 00:27:16.941 Variable Capacity Management: Not Supported 00:27:16.941 Delete Endurance Group: Not Supported 00:27:16.941 Delete NVM Set: Not Supported 00:27:16.941 Extended LBA Formats Supported: Not Supported 00:27:16.941 Flexible Data Placement Supported: Not Supported 00:27:16.941 00:27:16.941 Controller Memory Buffer Support 00:27:16.941 ================================ 00:27:16.941 Supported: No 00:27:16.941 00:27:16.941 Persistent Memory Region Support 00:27:16.941 ================================ 00:27:16.941 Supported: No 00:27:16.941 00:27:16.941 Admin Command Set Attributes 00:27:16.941 ============================ 00:27:16.941 Security Send/Receive: Not Supported 00:27:16.941 Format NVM: Not Supported 00:27:16.941 Firmware Activate/Download: Not Supported 00:27:16.941 Namespace Management: Not Supported 00:27:16.941 Device Self-Test: Not Supported 00:27:16.941 Directives: Not Supported 00:27:16.941 NVMe-MI: Not Supported 00:27:16.941 Virtualization Management: Not Supported 00:27:16.941 Doorbell Buffer Config: Not Supported 00:27:16.941 Get LBA Status Capability: Not Supported 00:27:16.941 Command & Feature Lockdown Capability: Not Supported 00:27:16.941 Abort Command Limit: 1 00:27:16.941 Async Event Request Limit: 1 00:27:16.941 Number of Firmware Slots: N/A 00:27:16.941 Firmware Slot 1 Read-Only: N/A 00:27:16.941 Firmware Activation Without Reset: N/A 00:27:16.941 Multiple Update Detection Support: N/A 00:27:16.941 Firmware Update Granularity: No Information Provided 00:27:16.941 Per-Namespace SMART Log: No 00:27:16.941 Asymmetric Namespace Access Log Page: Not Supported 00:27:16.941 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:16.941 Command Effects Log Page: Not Supported 00:27:16.941 Get Log Page Extended Data: Supported 00:27:16.941 Telemetry Log Pages: Not Supported 00:27:16.941 Persistent Event Log Pages: Not Supported 00:27:16.941 Supported Log Pages Log Page: May Support 00:27:16.941 Commands Supported & Effects Log Page: Not Supported 00:27:16.941 Feature Identifiers & Effects Log Page:May Support 00:27:16.941 NVMe-MI Commands & Effects Log Page: May Support 00:27:16.941 Data Area 4 for Telemetry Log: Not Supported 00:27:16.941 Error Log Page Entries Supported: 1 00:27:16.941 Keep Alive: Not Supported 00:27:16.941 00:27:16.941 NVM Command Set Attributes 00:27:16.941 ========================== 00:27:16.941 Submission Queue Entry Size 00:27:16.941 Max: 1 00:27:16.941 Min: 1 00:27:16.941 Completion Queue Entry Size 00:27:16.941 Max: 1 00:27:16.941 Min: 1 00:27:16.941 Number of Namespaces: 0 00:27:16.941 Compare Command: Not Supported 00:27:16.941 Write Uncorrectable Command: Not Supported 00:27:16.941 Dataset Management Command: Not Supported 00:27:16.941 Write Zeroes Command: Not Supported 00:27:16.941 Set Features Save Field: Not Supported 00:27:16.941 Reservations: Not Supported 00:27:16.941 Timestamp: Not Supported 00:27:16.941 Copy: Not Supported 00:27:16.941 Volatile Write Cache: Not Present 00:27:16.941 Atomic Write Unit (Normal): 1 00:27:16.941 Atomic Write Unit (PFail): 1 00:27:16.941 Atomic Compare & Write Unit: 1 00:27:16.941 Fused Compare & Write: Not Supported 00:27:16.941 Scatter-Gather List 00:27:16.941 SGL Command Set: Supported 00:27:16.941 SGL Keyed: Not Supported 00:27:16.941 SGL Bit Bucket Descriptor: Not Supported 00:27:16.941 SGL Metadata Pointer: Not Supported 00:27:16.941 Oversized SGL: Not Supported 00:27:16.941 SGL Metadata Address: Not Supported 
00:27:16.941 SGL Offset: Supported 00:27:16.941 Transport SGL Data Block: Not Supported 00:27:16.941 Replay Protected Memory Block: Not Supported 00:27:16.941 00:27:16.941 Firmware Slot Information 00:27:16.941 ========================= 00:27:16.941 Active slot: 0 00:27:16.941 00:27:16.941 00:27:16.941 Error Log 00:27:16.941 ========= 00:27:16.941 00:27:16.941 Active Namespaces 00:27:16.941 ================= 00:27:16.941 Discovery Log Page 00:27:16.941 ================== 00:27:16.941 Generation Counter: 2 00:27:16.941 Number of Records: 2 00:27:16.941 Record Format: 0 00:27:16.941 00:27:16.941 Discovery Log Entry 0 00:27:16.941 ---------------------- 00:27:16.941 Transport Type: 3 (TCP) 00:27:16.941 Address Family: 1 (IPv4) 00:27:16.941 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:16.941 Entry Flags: 00:27:16.941 Duplicate Returned Information: 0 00:27:16.941 Explicit Persistent Connection Support for Discovery: 0 00:27:16.941 Transport Requirements: 00:27:16.941 Secure Channel: Not Specified 00:27:16.941 Port ID: 1 (0x0001) 00:27:16.941 Controller ID: 65535 (0xffff) 00:27:16.941 Admin Max SQ Size: 32 00:27:16.941 Transport Service Identifier: 4420 00:27:16.941 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:16.941 Transport Address: 10.0.0.1 00:27:16.941 Discovery Log Entry 1 00:27:16.941 ---------------------- 00:27:16.941 Transport Type: 3 (TCP) 00:27:16.941 Address Family: 1 (IPv4) 00:27:16.941 Subsystem Type: 2 (NVM Subsystem) 00:27:16.941 Entry Flags: 00:27:16.941 Duplicate Returned Information: 0 00:27:16.941 Explicit Persistent Connection Support for Discovery: 0 00:27:16.941 Transport Requirements: 00:27:16.941 Secure Channel: Not Specified 00:27:16.941 Port ID: 1 (0x0001) 00:27:16.941 Controller ID: 65535 (0xffff) 00:27:16.941 Admin Max SQ Size: 32 00:27:16.941 Transport Service Identifier: 4420 00:27:16.941 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:16.941 Transport Address: 10.0.0.1 00:27:16.941 00:42:42 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:16.941 EAL: No free 2048 kB hugepages reported on node 1 00:27:16.941 get_feature(0x01) failed 00:27:16.941 get_feature(0x02) failed 00:27:16.941 get_feature(0x04) failed 00:27:16.941 ===================================================== 00:27:16.941 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:16.941 ===================================================== 00:27:16.941 Controller Capabilities/Features 00:27:16.941 ================================ 00:27:16.941 Vendor ID: 0000 00:27:16.941 Subsystem Vendor ID: 0000 00:27:16.941 Serial Number: fd7d3eccd09414dcac58 00:27:16.941 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:16.942 Firmware Version: 6.7.0-68 00:27:16.942 Recommended Arb Burst: 6 00:27:16.942 IEEE OUI Identifier: 00 00 00 00:27:16.942 Multi-path I/O 00:27:16.942 May have multiple subsystem ports: Yes 00:27:16.942 May have multiple controllers: Yes 00:27:16.942 Associated with SR-IOV VF: No 00:27:16.942 Max Data Transfer Size: Unlimited 00:27:16.942 Max Number of Namespaces: 1024 00:27:16.942 Max Number of I/O Queues: 128 00:27:16.942 NVMe Specification Version (VS): 1.3 00:27:16.942 NVMe Specification Version (Identify): 1.3 00:27:16.942 Maximum Queue Entries: 1024 00:27:16.942 Contiguous Queues Required: No 00:27:16.942 
Arbitration Mechanisms Supported 00:27:16.942 Weighted Round Robin: Not Supported 00:27:16.942 Vendor Specific: Not Supported 00:27:16.942 Reset Timeout: 7500 ms 00:27:16.942 Doorbell Stride: 4 bytes 00:27:16.942 NVM Subsystem Reset: Not Supported 00:27:16.942 Command Sets Supported 00:27:16.942 NVM Command Set: Supported 00:27:16.942 Boot Partition: Not Supported 00:27:16.942 Memory Page Size Minimum: 4096 bytes 00:27:16.942 Memory Page Size Maximum: 4096 bytes 00:27:16.942 Persistent Memory Region: Not Supported 00:27:16.942 Optional Asynchronous Events Supported 00:27:16.942 Namespace Attribute Notices: Supported 00:27:16.942 Firmware Activation Notices: Not Supported 00:27:16.942 ANA Change Notices: Supported 00:27:16.942 PLE Aggregate Log Change Notices: Not Supported 00:27:16.942 LBA Status Info Alert Notices: Not Supported 00:27:16.942 EGE Aggregate Log Change Notices: Not Supported 00:27:16.942 Normal NVM Subsystem Shutdown event: Not Supported 00:27:16.942 Zone Descriptor Change Notices: Not Supported 00:27:16.942 Discovery Log Change Notices: Not Supported 00:27:16.942 Controller Attributes 00:27:16.942 128-bit Host Identifier: Supported 00:27:16.942 Non-Operational Permissive Mode: Not Supported 00:27:16.942 NVM Sets: Not Supported 00:27:16.942 Read Recovery Levels: Not Supported 00:27:16.942 Endurance Groups: Not Supported 00:27:16.942 Predictable Latency Mode: Not Supported 00:27:16.942 Traffic Based Keep ALive: Supported 00:27:16.942 Namespace Granularity: Not Supported 00:27:16.942 SQ Associations: Not Supported 00:27:16.942 UUID List: Not Supported 00:27:16.942 Multi-Domain Subsystem: Not Supported 00:27:16.942 Fixed Capacity Management: Not Supported 00:27:16.942 Variable Capacity Management: Not Supported 00:27:16.942 Delete Endurance Group: Not Supported 00:27:16.942 Delete NVM Set: Not Supported 00:27:16.942 Extended LBA Formats Supported: Not Supported 00:27:16.942 Flexible Data Placement Supported: Not Supported 00:27:16.942 00:27:16.942 Controller Memory Buffer Support 00:27:16.942 ================================ 00:27:16.942 Supported: No 00:27:16.942 00:27:16.942 Persistent Memory Region Support 00:27:16.942 ================================ 00:27:16.942 Supported: No 00:27:16.942 00:27:16.942 Admin Command Set Attributes 00:27:16.942 ============================ 00:27:16.942 Security Send/Receive: Not Supported 00:27:16.942 Format NVM: Not Supported 00:27:16.942 Firmware Activate/Download: Not Supported 00:27:16.942 Namespace Management: Not Supported 00:27:16.942 Device Self-Test: Not Supported 00:27:16.942 Directives: Not Supported 00:27:16.942 NVMe-MI: Not Supported 00:27:16.942 Virtualization Management: Not Supported 00:27:16.942 Doorbell Buffer Config: Not Supported 00:27:16.942 Get LBA Status Capability: Not Supported 00:27:16.942 Command & Feature Lockdown Capability: Not Supported 00:27:16.942 Abort Command Limit: 4 00:27:16.942 Async Event Request Limit: 4 00:27:16.942 Number of Firmware Slots: N/A 00:27:16.942 Firmware Slot 1 Read-Only: N/A 00:27:16.942 Firmware Activation Without Reset: N/A 00:27:16.942 Multiple Update Detection Support: N/A 00:27:16.942 Firmware Update Granularity: No Information Provided 00:27:16.942 Per-Namespace SMART Log: Yes 00:27:16.942 Asymmetric Namespace Access Log Page: Supported 00:27:16.942 ANA Transition Time : 10 sec 00:27:16.942 00:27:16.942 Asymmetric Namespace Access Capabilities 00:27:16.942 ANA Optimized State : Supported 00:27:16.942 ANA Non-Optimized State : Supported 00:27:16.942 ANA Inaccessible State : 
Supported 00:27:16.942 ANA Persistent Loss State : Supported 00:27:16.942 ANA Change State : Supported 00:27:16.942 ANAGRPID is not changed : No 00:27:16.942 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:16.942 00:27:16.942 ANA Group Identifier Maximum : 128 00:27:16.942 Number of ANA Group Identifiers : 128 00:27:16.942 Max Number of Allowed Namespaces : 1024 00:27:16.942 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:16.942 Command Effects Log Page: Supported 00:27:16.942 Get Log Page Extended Data: Supported 00:27:16.942 Telemetry Log Pages: Not Supported 00:27:16.942 Persistent Event Log Pages: Not Supported 00:27:16.942 Supported Log Pages Log Page: May Support 00:27:16.942 Commands Supported & Effects Log Page: Not Supported 00:27:16.942 Feature Identifiers & Effects Log Page:May Support 00:27:16.942 NVMe-MI Commands & Effects Log Page: May Support 00:27:16.942 Data Area 4 for Telemetry Log: Not Supported 00:27:16.942 Error Log Page Entries Supported: 128 00:27:16.942 Keep Alive: Supported 00:27:16.942 Keep Alive Granularity: 1000 ms 00:27:16.942 00:27:16.942 NVM Command Set Attributes 00:27:16.942 ========================== 00:27:16.942 Submission Queue Entry Size 00:27:16.942 Max: 64 00:27:16.942 Min: 64 00:27:16.942 Completion Queue Entry Size 00:27:16.942 Max: 16 00:27:16.942 Min: 16 00:27:16.942 Number of Namespaces: 1024 00:27:16.942 Compare Command: Not Supported 00:27:16.942 Write Uncorrectable Command: Not Supported 00:27:16.942 Dataset Management Command: Supported 00:27:16.942 Write Zeroes Command: Supported 00:27:16.942 Set Features Save Field: Not Supported 00:27:16.942 Reservations: Not Supported 00:27:16.942 Timestamp: Not Supported 00:27:16.942 Copy: Not Supported 00:27:16.942 Volatile Write Cache: Present 00:27:16.942 Atomic Write Unit (Normal): 1 00:27:16.942 Atomic Write Unit (PFail): 1 00:27:16.942 Atomic Compare & Write Unit: 1 00:27:16.942 Fused Compare & Write: Not Supported 00:27:16.942 Scatter-Gather List 00:27:16.942 SGL Command Set: Supported 00:27:16.942 SGL Keyed: Not Supported 00:27:16.942 SGL Bit Bucket Descriptor: Not Supported 00:27:16.942 SGL Metadata Pointer: Not Supported 00:27:16.942 Oversized SGL: Not Supported 00:27:16.942 SGL Metadata Address: Not Supported 00:27:16.942 SGL Offset: Supported 00:27:16.942 Transport SGL Data Block: Not Supported 00:27:16.942 Replay Protected Memory Block: Not Supported 00:27:16.942 00:27:16.942 Firmware Slot Information 00:27:16.942 ========================= 00:27:16.942 Active slot: 0 00:27:16.942 00:27:16.942 Asymmetric Namespace Access 00:27:16.942 =========================== 00:27:16.942 Change Count : 0 00:27:16.942 Number of ANA Group Descriptors : 1 00:27:16.942 ANA Group Descriptor : 0 00:27:16.942 ANA Group ID : 1 00:27:16.942 Number of NSID Values : 1 00:27:16.942 Change Count : 0 00:27:16.942 ANA State : 1 00:27:16.942 Namespace Identifier : 1 00:27:16.942 00:27:16.942 Commands Supported and Effects 00:27:16.942 ============================== 00:27:16.942 Admin Commands 00:27:16.942 -------------- 00:27:16.942 Get Log Page (02h): Supported 00:27:16.942 Identify (06h): Supported 00:27:16.942 Abort (08h): Supported 00:27:16.942 Set Features (09h): Supported 00:27:16.942 Get Features (0Ah): Supported 00:27:16.942 Asynchronous Event Request (0Ch): Supported 00:27:16.942 Keep Alive (18h): Supported 00:27:16.942 I/O Commands 00:27:16.942 ------------ 00:27:16.942 Flush (00h): Supported 00:27:16.942 Write (01h): Supported LBA-Change 00:27:16.942 Read (02h): Supported 00:27:16.942 Write Zeroes 
(08h): Supported LBA-Change 00:27:16.942 Dataset Management (09h): Supported 00:27:16.942 00:27:16.942 Error Log 00:27:16.942 ========= 00:27:16.942 Entry: 0 00:27:16.942 Error Count: 0x3 00:27:16.942 Submission Queue Id: 0x0 00:27:16.942 Command Id: 0x5 00:27:16.942 Phase Bit: 0 00:27:16.942 Status Code: 0x2 00:27:16.942 Status Code Type: 0x0 00:27:16.942 Do Not Retry: 1 00:27:16.942 Error Location: 0x28 00:27:16.942 LBA: 0x0 00:27:16.942 Namespace: 0x0 00:27:16.942 Vendor Log Page: 0x0 00:27:16.942 ----------- 00:27:16.942 Entry: 1 00:27:16.942 Error Count: 0x2 00:27:16.942 Submission Queue Id: 0x0 00:27:16.942 Command Id: 0x5 00:27:16.942 Phase Bit: 0 00:27:16.942 Status Code: 0x2 00:27:16.942 Status Code Type: 0x0 00:27:16.942 Do Not Retry: 1 00:27:16.942 Error Location: 0x28 00:27:16.942 LBA: 0x0 00:27:16.942 Namespace: 0x0 00:27:16.942 Vendor Log Page: 0x0 00:27:16.942 ----------- 00:27:16.942 Entry: 2 00:27:16.942 Error Count: 0x1 00:27:16.943 Submission Queue Id: 0x0 00:27:16.943 Command Id: 0x4 00:27:16.943 Phase Bit: 0 00:27:16.943 Status Code: 0x2 00:27:16.943 Status Code Type: 0x0 00:27:16.943 Do Not Retry: 1 00:27:16.943 Error Location: 0x28 00:27:16.943 LBA: 0x0 00:27:16.943 Namespace: 0x0 00:27:16.943 Vendor Log Page: 0x0 00:27:16.943 00:27:16.943 Number of Queues 00:27:16.943 ================ 00:27:16.943 Number of I/O Submission Queues: 128 00:27:16.943 Number of I/O Completion Queues: 128 00:27:16.943 00:27:16.943 ZNS Specific Controller Data 00:27:16.943 ============================ 00:27:16.943 Zone Append Size Limit: 0 00:27:16.943 00:27:16.943 00:27:16.943 Active Namespaces 00:27:16.943 ================= 00:27:16.943 get_feature(0x05) failed 00:27:16.943 Namespace ID:1 00:27:16.943 Command Set Identifier: NVM (00h) 00:27:16.943 Deallocate: Supported 00:27:16.943 Deallocated/Unwritten Error: Not Supported 00:27:16.943 Deallocated Read Value: Unknown 00:27:16.943 Deallocate in Write Zeroes: Not Supported 00:27:16.943 Deallocated Guard Field: 0xFFFF 00:27:16.943 Flush: Supported 00:27:16.943 Reservation: Not Supported 00:27:16.943 Namespace Sharing Capabilities: Multiple Controllers 00:27:16.943 Size (in LBAs): 3907029168 (1863GiB) 00:27:16.943 Capacity (in LBAs): 3907029168 (1863GiB) 00:27:16.943 Utilization (in LBAs): 3907029168 (1863GiB) 00:27:16.943 UUID: 3b88366d-42f0-4e05-800a-d5c6480de26c 00:27:16.943 Thin Provisioning: Not Supported 00:27:16.943 Per-NS Atomic Units: Yes 00:27:16.943 Atomic Boundary Size (Normal): 0 00:27:16.943 Atomic Boundary Size (PFail): 0 00:27:16.943 Atomic Boundary Offset: 0 00:27:16.943 NGUID/EUI64 Never Reused: No 00:27:16.943 ANA group ID: 1 00:27:16.943 Namespace Write Protected: No 00:27:16.943 Number of LBA Formats: 1 00:27:16.943 Current LBA Format: LBA Format #00 00:27:16.943 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:16.943 00:27:16.943 00:42:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:16.943 00:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:16.943 00:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:27:16.943 00:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:16.943 00:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:27:16.943 00:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:16.943 00:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:27:16.943 rmmod nvme_tcp 00:27:16.943 rmmod nvme_fabrics 00:27:16.943 00:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:16.943 00:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:27:16.943 00:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:27:16.943 00:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:16.943 00:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:16.943 00:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:16.943 00:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:16.943 00:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:16.943 00:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:16.943 00:42:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.943 00:42:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:16.943 00:42:43 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.479 00:42:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:19.479 00:42:45 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:19.479 00:42:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:19.479 00:42:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:19.479 00:42:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:19.479 00:42:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:19.479 00:42:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:19.479 00:42:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:19.479 00:42:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:19.479 00:42:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:19.479 00:42:45 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:27:22.013 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:27:22.013 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:27:22.013 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:27:22.013 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:27:22.013 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:27:22.013 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:27:22.013 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:27:22.013 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:27:22.013 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:27:22.013 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:27:22.013 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:27:22.013 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:27:22.013 
0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:27:22.013 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:27:22.013 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:27:22.272 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:27:23.645 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:27:23.905 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci 00:27:24.471 00:27:24.471 real 0m18.400s 00:27:24.471 user 0m3.836s 00:27:24.471 sys 0m8.246s 00:27:24.471 00:42:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:27:24.471 00:42:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:24.471 ************************************ 00:27:24.471 END TEST nvmf_identify_kernel_target 00:27:24.471 ************************************ 00:27:24.471 00:42:50 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_auth_host /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:24.471 00:42:50 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:27:24.471 00:42:50 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:27:24.471 00:42:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:24.471 ************************************ 00:27:24.471 START TEST nvmf_auth_host 00:27:24.471 ************************************ 00:27:24.471 00:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:24.471 * Looking for test storage... 00:27:24.471 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:27:24.471 00:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:27:24.471 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:24.471 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:24.471 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:24.471 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:24.471 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:24.471 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:24.471 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:24.471 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:24.471 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:24.471 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:24.471 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:24.471 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:27:24.471 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:27:24.471 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:24.471 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:24.471 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:27:24.471 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:24.471 00:42:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:27:24.471 00:42:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:24.471 00:42:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:24.471 00:42:50 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:24.471 00:42:50 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.471 00:42:50 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.471 00:42:50 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.471 00:42:50 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:24.471 00:42:50 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:24.471 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:27:24.728 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:24.728 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:24.728 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:24.728 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:24.728 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:24.728 00:42:50 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:24.728 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:24.728 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:24.728 00:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:24.728 00:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:24.728 00:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:24.728 00:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:24.728 00:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:24.728 00:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:24.728 00:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:24.728 00:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:24.728 00:42:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:24.728 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:24.728 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:24.728 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:24.728 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:24.728 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:24.728 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:24.728 00:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:24.728 00:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:24.729 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:27:24.729 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:24.729 00:42:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:27:24.729 00:42:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@297 -- # local -ga x722 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:27:29.995 Found 0000:27:00.0 (0x8086 - 0x159b) 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:27:29.995 Found 0000:27:00.1 (0x8086 - 0x159b) 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:29.995 
00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:27:29.995 Found net devices under 0000:27:00.0: cvl_0_0 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:27:29.995 Found net devices under 0000:27:00.1: cvl_0_1 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
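The nvmf_tcp_init trace above settles on cvl_0_0 as the target-side interface and cvl_0_1 as the initiator side; the lines that follow move the target NIC into a private network namespace, address both ends, open TCP/4420 and check reachability with ping. Condensed into a stand-alone sketch (same interface, namespace and address choices as the script; run as root):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"                                          # private namespace for the target side
ip link set cvl_0_0 netns "$NS"                             # move the target NIC into it
ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator address stays in the host namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0     # target address lives inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                          # host -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1                      # namespace -> host

Anything started under ip netns exec "$NS" can then serve the 10.0.0.2 side while the host side talks from 10.0.0.1, which is how the nvmf_tgt below is launched.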
00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:29.995 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:29.996 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:29.996 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:29.996 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:29.996 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:29.996 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:29.996 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:29.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:29.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:27:29.996 00:27:29.996 --- 10.0.0.2 ping statistics --- 00:27:29.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.996 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:27:29.996 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:29.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:29.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:27:29.996 00:27:29.996 --- 10.0.0.1 ping statistics --- 00:27:29.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.996 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:27:29.996 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:29.996 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:27:29.996 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:29.996 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:29.996 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:29.996 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:29.996 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:29.996 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:29.996 00:42:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:29.996 00:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:29.996 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:29.996 00:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@721 -- # xtrace_disable 00:27:29.996 00:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:29.996 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2152128 00:27:29.996 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2152128 00:27:29.996 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:29.996 00:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # '[' -z 2152128 ']' 00:27:29.996 00:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.996 00:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:29.996 00:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:29.996 00:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:29.996 00:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@861 -- # return 0 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@727 -- # xtrace_disable 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=a8be2333cd9fb68c6521d047a3f97d43 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.nKp 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key a8be2333cd9fb68c6521d047a3f97d43 0 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 a8be2333cd9fb68c6521d047a3f97d43 0 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=a8be2333cd9fb68c6521d047a3f97d43 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.nKp 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.nKp 00:27:30.957 00:42:56 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.nKp 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=89837a7a2cbe84021bd86889042f1c7d48afebb904274dad198e8615b5faecff 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.dqu 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 89837a7a2cbe84021bd86889042f1c7d48afebb904274dad198e8615b5faecff 3 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 89837a7a2cbe84021bd86889042f1c7d48afebb904274dad198e8615b5faecff 3 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=89837a7a2cbe84021bd86889042f1c7d48afebb904274dad198e8615b5faecff 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.dqu 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.dqu 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.dqu 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c296dfc527720574f00c4b2de7ac1fa35cdc56f6fbef810c 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.BI3 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c296dfc527720574f00c4b2de7ac1fa35cdc56f6fbef810c 0 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c296dfc527720574f00c4b2de7ac1fa35cdc56f6fbef810c 0 
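Each gen_dhchap_key call in this stretch (and the ones that continue below) draws random bytes with xxd, pipes the hex string through an inline python helper (the helper body goes over stdin and never shows up in the trace) and drops the result into a mode-0600 key file. A rough stand-alone equivalent for the sha512/64-character case, assuming the DHHC-1 interchange encoding is base64(secret || CRC-32(secret)) with hash id 03 for SHA-512 (both are assumptions here, since the real formatter is not visible in the log):

key=$(xxd -p -c0 -l 32 /dev/urandom)        # 32 random bytes -> 64 hex characters
file=$(mktemp -t spdk.key-sha512.XXX)
python3 - "$key" > "$file" <<'PY'
import base64, binascii, sys
secret = bytes.fromhex(sys.argv[1])
crc = binascii.crc32(secret).to_bytes(4, "little")              # assumed little-endian CRC suffix
print(f"DHHC-1:03:{base64.b64encode(secret + crc).decode()}:")  # 03 = SHA-512 hash id (assumption)
PY
chmod 0600 "$file"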
00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c296dfc527720574f00c4b2de7ac1fa35cdc56f6fbef810c 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.BI3 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.BI3 00:27:30.957 00:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.BI3 00:27:30.958 00:42:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:30.958 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:30.958 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:30.958 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:30.958 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:30.958 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:30.958 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:30.958 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0d6e74b886f512845f75c154f4eefe04409dab7ce6c0f891 00:27:30.958 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:30.958 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.fuV 00:27:30.958 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0d6e74b886f512845f75c154f4eefe04409dab7ce6c0f891 2 00:27:30.958 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0d6e74b886f512845f75c154f4eefe04409dab7ce6c0f891 2 00:27:30.958 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:30.958 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:30.958 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0d6e74b886f512845f75c154f4eefe04409dab7ce6c0f891 00:27:30.958 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:30.958 00:42:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.fuV 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.fuV 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.fuV 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd 
-p -c0 -l 16 /dev/urandom 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=658f718457c62f64e5daf24fd6624887 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.LBT 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 658f718457c62f64e5daf24fd6624887 1 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 658f718457c62f64e5daf24fd6624887 1 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=658f718457c62f64e5daf24fd6624887 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.LBT 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.LBT 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.LBT 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d7a77be4d76ebd123f6f1aceb60e21d9 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.65x 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d7a77be4d76ebd123f6f1aceb60e21d9 1 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d7a77be4d76ebd123f6f1aceb60e21d9 1 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d7a77be4d76ebd123f6f1aceb60e21d9 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.65x 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.65x 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.65x 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@723 -- # local digest len file key 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=76654f72b89269cc8833d1dc3cbe2002b2eebf03b8b4ff13 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.iVZ 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 76654f72b89269cc8833d1dc3cbe2002b2eebf03b8b4ff13 2 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 76654f72b89269cc8833d1dc3cbe2002b2eebf03b8b4ff13 2 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=76654f72b89269cc8833d1dc3cbe2002b2eebf03b8b4ff13 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:27:30.958 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:31.215 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.iVZ 00:27:31.215 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.iVZ 00:27:31.215 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.iVZ 00:27:31.215 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:31.215 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:31.215 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:31.215 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:31.215 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:27:31.215 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=dee88d6c8d1ead9dde9e31eabced55c1 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.yM4 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dee88d6c8d1ead9dde9e31eabced55c1 0 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dee88d6c8d1ead9dde9e31eabced55c1 0 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=dee88d6c8d1ead9dde9e31eabced55c1 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@704 -- # digest=0 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.yM4 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.yM4 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.yM4 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e4c45c8f1472ab40ad757df96ef0580c40fb1bbdb518a2f454d5be1c1ceed069 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.ef9 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e4c45c8f1472ab40ad757df96ef0580c40fb1bbdb518a2f454d5be1c1ceed069 3 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e4c45c8f1472ab40ad757df96ef0580c40fb1bbdb518a2f454d5be1c1ceed069 3 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e4c45c8f1472ab40ad757df96ef0580c40fb1bbdb518a2f454d5be1c1ceed069 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.ef9 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.ef9 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.ef9 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2152128 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@828 -- # '[' -z 2152128 ']' 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local max_retries=100 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:31.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
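With the target process up (waitforlisten above), the next step in auth.sh is to hand every generated key file to it. rpc_cmd is essentially a test wrapper around scripts/rpc.py, so the registrations traced below amount to calls like the following (socket path taken from the rpc_addr shown by waitforlisten above):

./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key0  /tmp/spdk.key-null.nKp
./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dqu

The ckeyN entries carry the controller-side secrets alongside the host keys, which lets the later cases cover bidirectional DH-HMAC-CHAP in addition to one-way authentication.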
00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@837 -- # xtrace_disable 00:27:31.216 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@861 -- # return 0 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.nKp 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.dqu ]] 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dqu 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.BI3 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.fuV ]] 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fuV 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.LBT 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.65x ]] 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.65x 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.iVZ 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.yM4 ]] 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.yM4 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.ef9 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:31.475 00:42:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:31.476 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:31.476 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:31.476 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:31.476 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:31.476 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:31.476 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:31.476 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:31.476 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:31.476 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:31.476 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:31.476 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:31.476 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:31.476 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:31.476 00:42:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:31.476 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:31.476 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:31.476 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:31.476 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:31.476 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:31.476 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
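configure_kernel_target, traced below, stands up a Linux kernel nvmet target for the auth cases purely through configfs: create the subsystem and a namespace backed by a local NVMe block device, open a TCP port on 10.0.0.1:4420 and link the two. The bare echo lines in the trace do not show which attribute each value lands in; spelled out with the standard nvmet configfs attribute names (the names are inferred, only the values appear in the log), the sequence is roughly:

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
modprobe nvmet
mkdir $subsys
mkdir $subsys/namespaces/1
mkdir $nvmet/ports/1
echo SPDK-nqn.2024-02.io.spdk:cnode0 > $subsys/attr_model   # model string, matching the SPDK-<nqn> pattern seen in the earlier identify output
echo 1            > $subsys/attr_allow_any_host
echo /dev/nvme1n1 > $subsys/namespaces/1/device_path        # backing device chosen by the block scan below
echo 1            > $subsys/namespaces/1/enable
echo 10.0.0.1     > $nvmet/ports/1/addr_traddr
echo tcp          > $nvmet/ports/1/addr_trtype
echo 4420         > $nvmet/ports/1/addr_trsvcid
echo ipv4         > $nvmet/ports/1/addr_adrfam
ln -s $subsys $nvmet/ports/1/subsystems/                    # expose the subsystem on the port

The nvme discover output in the trace confirms the port then advertises both the discovery subsystem and nqn.2024-02.io.spdk:cnode0 on 10.0.0.1:4420; the final mkdir under /sys/kernel/config/nvmet/hosts and the truncated ln -s belong to nvmet_auth_init, which registers the host NQN the auth cases will authenticate as.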
00:27:31.476 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:31.476 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:31.476 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:31.476 00:42:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:27:34.011 Waiting for block devices as requested 00:27:34.011 0000:c9:00.0 (8086 0a54): vfio-pci -> nvme 00:27:34.270 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:27:34.270 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:27:34.528 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:27:34.528 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:27:34.786 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:27:34.786 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:27:35.044 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:27:35.044 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:27:35.044 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:27:35.302 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:27:35.302 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:27:35.559 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:27:35.559 0000:ca:00.0 (8086 0a54): vfio-pci -> nvme 00:27:35.817 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:27:35.817 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:27:36.075 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:27:36.075 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:27:37.010 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:37.010 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:37.010 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:37.010 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:27:37.010 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:37.010 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:27:37.010 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:37.010 00:43:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:37.010 00:43:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:37.269 No valid GPT data, bailing 00:27:37.269 00:43:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:37.269 00:43:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:37.269 00:43:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:37.269 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:37.269 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:37.269 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:27:37.269 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:27:37.269 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:27:37.269 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:27:37.269 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:27:37.269 00:43:03 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:27:37.269 00:43:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:27:37.269 00:43:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:27:37.269 No valid GPT data, bailing 00:27:37.269 00:43:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:27:37.269 00:43:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:27:37.269 00:43:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:27:37.269 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:27:37.269 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:27:37.269 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:37.269 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:37.269 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:37.269 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:37.269 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:27:37.269 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:27:37.269 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:27:37.269 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -a 10.0.0.1 -t tcp -s 4420 00:27:37.270 00:27:37.270 Discovery Log Number of Records 2, Generation counter 2 00:27:37.270 =====Discovery Log Entry 0====== 00:27:37.270 trtype: tcp 00:27:37.270 adrfam: ipv4 00:27:37.270 subtype: current discovery subsystem 00:27:37.270 treq: not specified, sq flow control disable supported 00:27:37.270 portid: 1 00:27:37.270 trsvcid: 4420 00:27:37.270 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:37.270 traddr: 10.0.0.1 00:27:37.270 eflags: none 00:27:37.270 sectype: none 00:27:37.270 =====Discovery Log Entry 1====== 00:27:37.270 trtype: tcp 00:27:37.270 adrfam: ipv4 00:27:37.270 subtype: nvme subsystem 00:27:37.270 treq: not specified, sq flow control disable supported 00:27:37.270 portid: 1 00:27:37.270 trsvcid: 4420 00:27:37.270 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:37.270 traddr: 10.0.0.1 00:27:37.270 eflags: none 00:27:37.270 sectype: none 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s 
/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: ]] 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.270 00:43:03 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.270 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.528 nvme0n1 00:27:37.528 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.528 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.528 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.528 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.528 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.528 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.528 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.528 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.528 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.528 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.528 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.528 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:37.528 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:37.528 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.528 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:37.528 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.528 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.528 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:37.528 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:37.528 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:27:37.528 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:27:37.528 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.528 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:37.528 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:27:37.528 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: ]] 00:27:37.529 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:27:37.529 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:27:37.529 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.529 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:37.529 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:37.529 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:37.529 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.529 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:37.529 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.529 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.529 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.529 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.529 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.529 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.529 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.529 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.529 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.529 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.529 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.529 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.529 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.529 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.529 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:37.529 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.529 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.786 nvme0n1 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.786 00:43:03 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: ]] 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:37.786 00:43:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.044 nvme0n1 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: ]] 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha256 ffdhe2048 2 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.044 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.303 nvme0n1 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.303 00:43:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: ]] 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.303 00:43:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.303 nvme0n1 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.303 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.563 nvme0n1 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:38.563 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.564 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:38.564 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.564 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.564 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:38.564 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:38.564 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:27:38.564 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:27:38.564 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.564 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:38.564 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:27:38.564 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: ]] 00:27:38.564 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:27:38.564 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:27:38.564 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.564 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:38.564 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:38.564 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:38.564 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.564 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:38.564 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.564 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.564 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.824 nvme0n1 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.824 
00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: ]] 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:38.824 00:43:04 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:38.824 00:43:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.084 nvme0n1 00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
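The echo sequence running through this point is nvmet_auth_set_key wiring the expected DH-HMAC-CHAP credentials into the kernel target before the next connect attempt; the trace resumes with the key material itself just below. A sketch of the helper, under the assumption that the values land in the host's dhchap_* configfs attributes (the redirect targets are not visible in xtrace):

    # Sketch of nvmet_auth_set_key <digest> <dhgroup> <keyid> (host/auth.sh@42-51);
    # the dhchap_* attribute paths are an assumption based on the standard
    # nvmet-auth configfs layout, only the echoed values appear in the trace.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        local key ckey
        key=$(< "${keys[keyid]}")                        # DHHC-1:... secret for keyN
        ckey=${ckeys[keyid]:+$(< "${ckeys[keyid]}")}     # empty when no controller key exists
        echo "hmac($digest)" > "$host/dhchap_hash"
        echo "$dhgroup"      > "$host/dhchap_dhgroup"
        echo "$key"          > "$host/dhchap_key"
        [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"
    }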
00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: ]] 00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:39.084 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:39.085 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.085 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:39.085 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.085 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.085 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.085 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.085 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.085 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.085 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.085 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.085 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.085 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.085 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.085 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.085 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.085 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.085 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:39.085 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.085 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.343 nvme0n1 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.343 00:43:05 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: ]] 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
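On the initiator side each round then calls connect_authenticate <digest> <dhgroup> <keyid>: bdev_nvme is restricted to the digest/dhgroup under test, a controller is attached using the matching keyring entries, and the test asserts that a controller named nvme0 actually exists before detaching it again. A sketch assembled from the rpc_cmd calls visible in the trace (the surrounding shell is paraphrased, not the literal host/auth.sh):

    # Sketch of connect_authenticate <digest> <dhgroup> <keyid> (host/auth.sh@55-65),
    # assembled from the rpc_cmd invocations traced above.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"   # 10.0.0.1 is what get_main_ns_ip resolved to in the trace

        # authentication succeeded only if the controller really came up
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

A mismatch between the SPDK keyring entry and the dhchap_* values set on the kernel target is expected to make the attach RPC fail, which is exactly what the nvme0 check guards in every digest/dhgroup/keyid combination that follows.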
00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.343 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.600 nvme0n1 00:27:39.600 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.600 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.600 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.600 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.600 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.600 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.600 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.600 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.600 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.600 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.600 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.600 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.600 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:27:39.600 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.600 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:39.600 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:39.600 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:39.600 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:27:39.600 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:39.600 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:39.600 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:39.601 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:27:39.601 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:39.601 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:27:39.601 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.601 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:39.601 
00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:39.601 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:39.601 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.601 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:39.601 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.601 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.601 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.601 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.601 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.601 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.601 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.601 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.601 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.601 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.601 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.601 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.601 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.601 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.601 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:39.601 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.601 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.858 nvme0n1 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha256 ffdhe4096 0 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: ]] 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:39.859 00:43:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.117 nvme0n1 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: ]] 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.117 00:43:06 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.117 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.118 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.118 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.118 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.118 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:40.118 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.118 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.377 nvme0n1 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
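Every attach above is followed by the same success check before the next combination: the controller list must contain the expected name, and the controller is detached so the following digest/dhgroup/key triple starts from a clean state. A sketch of that check, again assuming the rpc.py-backed rpc_cmd wrapper from the earlier sketch; the jq filter and the nvme0 comparison are the ones shown in the trace:

  # Confirm the controller actually registered as nvme0, then tear it down.
  ctrl_name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ "$ctrl_name" == "nvme0" ]] || { echo "DH-HMAC-CHAP attach failed" >&2; exit 1; }
  rpc_cmd bdev_nvme_detach_controller nvme0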
00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: ]] 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.377 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.637 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.637 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:40.637 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.637 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.637 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.637 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.637 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.637 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.637 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.637 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.637 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.637 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:40.637 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.637 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.637 nvme0n1 00:27:40.637 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.637 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:27:40.637 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:40.637 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.637 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.637 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: ]] 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:40.897 00:43:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.157 nvme0n1 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.157 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.416 nvme0n1 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- 
# xtrace_disable 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: ]] 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
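The [[ -z tcp ]] and [[ -z NVMF_INITIATOR_IP ]] tests traced here belong to get_main_ns_ip, which maps the transport to the name of the environment variable holding the address and then dereferences it (10.0.0.1 for this TCP run). A condensed sketch of that selection; the early-return error paths are an assumption, since the trace only shows the success path:

  # Resolve the address to dial for the current transport (tcp -> NVMF_INITIATOR_IP).
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP
          ["tcp"]=NVMF_INITIATOR_IP
      )
      [[ -z $TEST_TRANSPORT ]] && return 1
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1      # indirect expansion, e.g. NVMF_INITIATOR_IP=10.0.0.1
      echo "${!ip}"
  }

With TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1 exported, the function prints 10.0.0.1, matching the echo in the trace.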
00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.416 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.674 nvme0n1 00:27:41.674 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.674 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:41.674 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:41.674 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.674 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.674 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.932 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:41.932 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:41.932 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.932 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.932 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.932 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:41.932 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:27:41.932 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: ]] 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 
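Key index 4 in the cycles above carries an empty controller key, which is why its bdev_nvme_attach_controller calls omit --dhchap-ctrlr-key while keyids 0 through 3 pass it. The host/auth.sh@58 line handles this with a ':+' parameter expansion that yields either nothing or the flag/value pair; a small sketch of the pattern, with the rpc_cmd wrapper and the placeholder ckeys entry as assumptions:

  rpc_cmd() { "${rootdir:-.}/scripts/rpc.py" "$@"; }   # assumed wrapper, not shown in the trace
  keyid=4
  declare -a ckeys
  ckeys[4]=""   # placeholder: keyid 4 has no controller key; keyids 0-3 would hold DHHC-1 strings
  # Expands to an empty array when ckeys[keyid] is empty, otherwise to the two words
  # "--dhchap-ctrlr-key" and "ckey<keyid>".
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" "${ckey[@]}"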
00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:41.933 00:43:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.191 nvme0n1 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: ]] 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.191 00:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.757 nvme0n1 00:27:42.757 00:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.757 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:42.757 00:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.757 00:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: ]] 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:42.758 00:43:08 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.016 nvme0n1 00:27:43.016 00:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.016 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.016 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.016 00:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.016 00:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.016 00:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.276 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.276 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.276 00:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.276 00:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.276 00:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.276 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.276 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:27:43.276 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.276 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:43.276 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:43.276 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:43.276 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:27:43.276 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:43.276 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:43.276 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:43.276 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:27:43.277 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:43.277 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:27:43.277 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.277 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:43.277 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:43.277 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:43.277 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.277 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:27:43.277 00:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.277 00:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.277 00:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.277 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.277 00:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.277 00:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.277 00:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.277 00:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.277 00:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.277 00:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.277 00:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.277 00:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.277 00:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.277 00:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.277 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:43.277 00:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.277 00:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.535 nvme0n1 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.535 00:43:09 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: ]] 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:43.535 00:43:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.469 nvme0n1 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: ]] 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:44.469 00:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.037 nvme0n1 00:27:45.037 00:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.037 00:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.037 00:43:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.037 00:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.037 00:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.037 00:43:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.037 00:43:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: ]] 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.037 00:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.604 nvme0n1 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: ]] 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:27:45.604 00:43:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:45.604 00:43:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.168 nvme0n1 00:27:46.168 00:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.168 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.168 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.168 00:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.168 00:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.168 00:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.168 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.168 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.168 00:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.168 00:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.168 00:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:27:46.426 00:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.992 nvme0n1 00:27:46.992 00:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.992 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:46.992 00:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.992 00:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.992 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:46.992 00:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.992 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:46.992 00:43:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:46.992 00:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.992 00:43:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: ]] 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:46.992 
00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:46.992 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.251 nvme0n1 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # 
keyid=1 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: ]] 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.251 nvme0n1 00:27:47.251 00:43:13 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.251 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.508 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: ]] 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.509 nvme0n1 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 
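The echo calls traced at host/auth.sh@48-51 throughout this run are the target-side half of every iteration: nvmet_auth_set_key installs the digest, DH group, host key and, when one is defined, the bidirectional controller key for the test host before the host attempts to connect. A minimal sketch of that helper is below; it assumes the stock Linux nvmet configfs layout, and the $hostnqn variable, the keys/ckeys arrays and the attribute paths are illustrative assumptions rather than the script's exact implementation.

    # Sketch only -- where the echoed values plausibly land on the target side.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        local host=/sys/kernel/config/nvmet/hosts/$hostnqn   # assumed configfs path

        echo "hmac($digest)" > "$host/dhchap_hash"      # e.g. hmac(sha384)
        echo "$dhgroup"      > "$host/dhchap_dhgroup"   # e.g. ffdhe2048
        echo "$key"          > "$host/dhchap_key"       # DHHC-1:... host secret
        # The controller key is optional; keyid 4 has no ckey in this run.
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }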
00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: ]] 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.509 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.766 nvme0n1 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.766 
00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:47.766 00:43:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:47.766 00:43:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.024 nvme0n1 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: ]] 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:48.024 00:43:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.024 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.283 nvme0n1 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: ]] 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.283 00:43:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.283 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.541 nvme0n1 00:27:48.541 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.541 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.541 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.541 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.541 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.541 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.541 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.541 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.541 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.541 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.541 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.541 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.541 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: ]] 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.542 00:43:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.542 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.800 nvme0n1 00:27:48.800 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.800 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
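Each iteration then drives the host side: connect_authenticate (host/auth.sh@55-61) restricts the SPDK host to the digest/dhgroup pair under test, resolves the initiator address via get_main_ns_ip from nvmf/common.sh (10.0.0.1 for tcp in this run), and attaches with the keyring names for the current keyid; the caller then checks the controller name and detaches (host/auth.sh@64-65). The sketch below condenses that per-iteration flow as it appears in the xtrace; the helper name is illustrative and the body is a simplification, not the script verbatim.

    # Condensed sketch of one host-side auth iteration as traced above.
    auth_connect_cycle() {
        local digest=$1 dhgroup=$2 keyid=$3
        # Allow only the digest/dhgroup pair being exercised.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # get_main_ns_ip picks NVMF_INITIATOR_IP for tcp (10.0.0.1 here).
        local ip
        ip=$(get_main_ns_ip)
        # Attach with the named host key and, when defined, the controller key.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
        # A controller only shows up if DH-HMAC-CHAP succeeded; verify, then tear down.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }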
00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: ]] 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:48.801 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.060 nvme0n1 00:27:49.060 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.060 00:43:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.060 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.060 00:43:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.060 00:43:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.060 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.318 nvme0n1 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:27:49.318 00:43:15 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: ]] 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.318 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.575 nvme0n1 00:27:49.575 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.575 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.575 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.575 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.575 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.575 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.575 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.575 
00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.575 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.575 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.575 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.575 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.575 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:49.575 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.575 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: ]] 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.576 00:43:15 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.576 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.832 nvme0n1 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: ]] 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:49.832 00:43:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.089 nvme0n1 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.089 
00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: ]] 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.089 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.347 nvme0n1 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.347 00:43:16 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:50.606 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.606 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.606 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.606 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.606 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.606 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.606 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.606 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.606 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.606 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.606 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.606 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.606 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:50.606 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.606 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.606 nvme0n1 00:27:50.606 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.606 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:50.606 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:50.606 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.606 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.606 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: ]] 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:50.866 00:43:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.125 nvme0n1 00:27:51.125 00:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.125 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.125 00:43:17 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.125 00:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.125 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.125 00:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.125 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.125 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.125 00:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.125 00:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.125 00:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.125 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.125 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:51.125 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.125 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: ]] 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.126 00:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.694 nvme0n1 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: ]] 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:51.694 00:43:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.018 nvme0n1 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.018 
00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: ]] 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
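The [[ -z tcp ]] and [[ -z NVMF_INITIATOR_IP ]] checks being traced here come from the helper that resolves which address the host should dial before each attach. The sketch below restates that logic rather than copying it: variable names in the candidate map follow the trace, the dereferenced result is the 10.0.0.1 seen in this run, and TEST_TRANSPORT is an assumed name for the transport variable, which is not visible in the xtrace output.

  # Paraphrase of get_main_ns_ip as traced: map the transport to the *name* of
  # the environment variable holding the address, then indirect-expand it.
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=( [rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP )
      # $TEST_TRANSPORT is an assumed variable name; "tcp" in this run.
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1
      echo "${!ip}"   # resolves to 10.0.0.1 here (tcp -> NVMF_INITIATOR_IP)
  }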
00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.018 00:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.584 nvme0n1 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:52.584 
00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.584 00:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.843 nvme0n1 00:27:52.843 00:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:52.843 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:52.843 00:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:52.843 00:43:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:52.843 00:43:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.103 00:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.103 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.103 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.103 00:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.103 00:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.103 00:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.103 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:53.103 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.103 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:53.103 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # 
local digest dhgroup keyid key ckey 00:27:53.103 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: ]] 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.104 00:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.673 nvme0n1 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: ]] 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:53.673 00:43:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.239 nvme0n1 00:27:54.239 00:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.239 00:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.239 00:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.239 00:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.239 00:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:54.239 00:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.239 00:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:54.239 00:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:54.239 00:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.239 00:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.496 00:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.496 00:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:54.496 00:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:54.496 00:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:54.496 00:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:54.496 00:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:54.496 00:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:54.496 00:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:27:54.496 00:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:27:54.496 00:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:54.496 00:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:54.496 00:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:27:54.496 00:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: ]] 00:27:54.496 00:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:27:54.496 00:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:54.496 00:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:54.496 00:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:54.496 00:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:54.496 00:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:54.496 00:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:54.496 00:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:54.496 00:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.496 00:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:54.496 00:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:54.496 00:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:54.496 00:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:54.496 00:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:54.497 00:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:54.497 00:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.497 00:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.497 00:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:54.497 00:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.497 00:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:54.497 00:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:54.497 00:43:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:54.497 00:43:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:54.497 00:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:54.497 00:43:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.064 nvme0n1 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.064 00:43:21 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: ]] 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.064 00:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.633 nvme0n1 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:27:55.633 00:43:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:55.633 00:43:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.568 nvme0n1 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.568 
00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: ]] 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.568 00:43:22 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.568 nvme0n1 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: ]] 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:56.568 00:43:22 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.568 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.827 nvme0n1 00:27:56.827 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.827 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.827 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:56.827 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: ]] 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:56.828 00:43:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.086 nvme0n1 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: ]] 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe2048 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.086 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.345 nvme0n1 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.345 nvme0n1 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.345 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.345 00:43:23 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: ]] 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.605 nvme0n1 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.605 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: ]] 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.864 nvme0n1 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.864 00:43:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:57.864 00:43:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.864 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.864 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:57.864 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: ]] 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.122 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.123 nvme0n1 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: ]] 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:58.123 00:43:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.123 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.381 nvme0n1 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # 
xtrace_disable 00:27:58.381 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.639 nvme0n1 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: ]] 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe4096 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.639 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:58.640 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.640 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:58.640 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:58.640 00:43:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:58.640 00:43:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:58.640 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.640 00:43:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.897 nvme0n1 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: ]] 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:58.897 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.157 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.157 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.157 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.157 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.157 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.157 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.157 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.157 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.157 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.157 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.157 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.157 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.157 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:59.157 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.157 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.157 nvme0n1 00:27:59.157 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.157 00:43:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.157 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.157 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.157 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: ]] 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.418 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.679 nvme0n1 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: ]] 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.679 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.937 nvme0n1 00:27:59.937 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.937 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.937 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.937 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:59.937 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.937 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.937 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:27:59.937 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.937 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.937 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.937 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.937 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:59.937 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:59.937 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.937 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:59.937 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:59.937 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:59.937 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:27:59.937 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:59.937 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:59.937 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:59.937 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:27:59.937 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:59.937 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:59.937 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.937 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:59.938 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:59.938 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:59.938 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.938 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:59.938 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.938 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.938 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:27:59.938 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.938 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:59.938 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:59.938 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:59.938 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.938 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.938 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:59.938 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.938 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
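(The get_main_ns_ip expansion traced just above reduces to a small helper in nvmf/common.sh; the sketch below is reconstructed from the xtrace only. TEST_TRANSPORT and the return-on-empty guards are assumptions — the trace shows just the expanded values tcp, NVMF_INITIATOR_IP and 10.0.0.1.)

get_main_ns_ip() {
    local ip
    local -A ip_candidates=()
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP

    # TEST_TRANSPORT is assumed to hold "tcp" in this run; only the expanded
    # value is visible in the trace.
    [[ -z $TEST_TRANSPORT ]] && return 1
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1

    # The candidate is the *name* of another variable (NVMF_INITIATOR_IP here),
    # so it is dereferenced before being printed (10.0.0.1 in this run).
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1
    echo "${!ip}"
}
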
00:27:59.938 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:59.938 00:43:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:59.938 00:43:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:59.938 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:27:59.938 00:43:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.197 nvme0n1 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: ]] 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
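(For orientation: the repeated blocks in this part of the log are iterations of the same inner loop in host/auth.sh, one per digest/dhgroup/keyid combination — this stretch is the sha512 pass. A condensed sketch follows; rpc_cmd, the keys/ckeys arrays, the dhgroups list and what nvmet_auth_set_key does internally are taken as given from the script's context, since only the expanded commands appear in the trace.)

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        # Program the target side with the key pair for this keyid
        # (presumably via nvmet configfs, as done by nvmet_auth_set_key).
        nvmet_auth_set_key sha512 "$dhgroup" "$keyid"

        # Restrict the initiator to the digest/dhgroup under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups "$dhgroup"

        # keyid 4 has no controller key, so the extra argument is built
        # conditionally, mirroring host/auth.sh@58 in the trace.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # Connect, verify the controller appeared, then tear it down before
        # the next combination.
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done
done
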
00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.197 00:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.763 nvme0n1 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 1 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: ]] 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.763 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:00.764 00:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.764 00:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.764 00:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:00.764 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.764 00:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:00.764 00:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:00.764 00:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:00.764 00:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.764 00:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.764 00:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:00.764 00:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.764 00:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:00.764 00:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:00.764 00:43:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:00.764 00:43:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:00.764 00:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:00.764 00:43:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.022 nvme0n1 00:28:01.022 00:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.022 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.022 00:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.022 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.022 00:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.022 00:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.022 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.022 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.022 00:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.022 00:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.022 00:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.022 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.022 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:01.022 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.022 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:01.022 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:01.022 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:01.022 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:28:01.022 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:28:01.022 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:01.022 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:01.022 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:28:01.022 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: ]] 00:28:01.023 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:28:01.023 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:01.023 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.023 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:01.023 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:01.023 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:01.023 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.023 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 
--dhchap-dhgroups ffdhe6144 00:28:01.023 00:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.023 00:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.023 00:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.023 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.023 00:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.023 00:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.023 00:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.023 00:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.023 00:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.023 00:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.023 00:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.023 00:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.023 00:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.023 00:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.023 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:01.023 00:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.023 00:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.594 nvme0n1 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: ]] 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.594 00:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.855 nvme0n1 00:28:01.855 00:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.855 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:28:01.855 00:43:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.855 00:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.855 00:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.855 00:43:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:01.855 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.855 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.855 00:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:01.855 00:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.113 00:43:28 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.113 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:02.114 00:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.114 00:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.371 nvme0n1 00:28:02.371 00:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.371 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.371 00:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.371 00:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.371 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.371 00:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.371 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.371 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.371 00:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.371 00:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.371 00:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.371 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YThiZTIzMzNjZDlmYjY4YzY1MjFkMDQ3YTNmOTdkNDPa1cH5: 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: ]] 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ODk4MzdhN2EyY2JlODQwMjFiZDg2ODg5MDQyZjFjN2Q0OGFmZWJiOTA0Mjc0ZGFkMTk4ZTg2MTViNWZhZWNmZvVjEIk=: 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:02.372 00:43:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.308 nvme0n1 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- 
# rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: ]] 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z 
tcp ]] 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.308 00:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.877 nvme0n1 00:28:03.877 00:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.877 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.877 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.877 00:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.877 00:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.877 00:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.877 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.877 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.877 00:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.877 00:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.877 00:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.877 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.877 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:03.877 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.877 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:03.877 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:03.877 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:03.877 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:28:03.877 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:28:03.877 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:03.877 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:03.877 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjU4ZjcxODQ1N2M2MmY2NGU1ZGFmMjRmZDY2MjQ4ODdb0awm: 00:28:03.877 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: ]] 00:28:03.878 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:ZDdhNzdiZTRkNzZlYmQxMjNmNmYxYWNlYjYwZTIxZDk9t1TL: 00:28:03.878 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:03.878 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.878 
00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:03.878 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:03.878 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:03.878 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.878 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:03.878 00:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.878 00:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.878 00:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:03.878 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.878 00:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:03.878 00:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:03.878 00:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:03.878 00:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.878 00:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.878 00:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:03.878 00:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.878 00:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:03.878 00:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:03.878 00:43:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:03.878 00:43:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:03.878 00:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:03.878 00:43:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.446 nvme0n1 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzY2NTRmNzJiODkyNjljYzg4MzNkMWRjM2NiZTIwMDJiMmVlYmYwM2I4YjRmZjEzc+u4fQ==: 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: ]] 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGVlODhkNmM4ZDFlYWQ5ZGRlOWUzMWVhYmNlZDU1YzFpSxP+: 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:04.446 00:43:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.012 nvme0n1 00:28:05.012 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.012 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.012 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.012 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.012 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.012 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTRjNDVjOGYxNDcyYWI0MGFkNzU3ZGY5NmVmMDU4MGM0MGZiMWJiZGI1MThhMmY0NTRkNWJlMWMxY2VlZDA2ORVY1pk=: 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.273 00:43:31 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.273 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.844 nvme0n1 00:28:05.844 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.844 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.844 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.844 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.844 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.844 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.844 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.844 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.844 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.844 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.844 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.844 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:05.844 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.844 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:05.844 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:05.844 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe2048 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzI5NmRmYzUyNzcyMDU3NGYwMGM0YjJkZTdhYzFmYTM1Y2RjNTZmNmZiZWY4MTBj6ckVHQ==: 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: ]] 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MGQ2ZTc0Yjg4NmY1MTI4NDVmNzVjMTU0ZjRlZWZlMDQ0MDlkYWI3Y2U2YzBmODkxg0Azlw==: 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.845 request: 00:28:05.845 { 00:28:05.845 "name": "nvme0", 00:28:05.845 "trtype": "tcp", 00:28:05.845 "traddr": "10.0.0.1", 00:28:05.845 "hostnqn": 
"nqn.2024-02.io.spdk:host0", 00:28:05.845 "adrfam": "ipv4", 00:28:05.845 "trsvcid": "4420", 00:28:05.845 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:05.845 "method": "bdev_nvme_attach_controller", 00:28:05.845 "req_id": 1 00:28:05.845 } 00:28:05.845 Got JSON-RPC error response 00:28:05.845 response: 00:28:05.845 { 00:28:05.845 "code": -32602, 00:28:05.845 "message": "Invalid parameters" 00:28:05.845 } 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:05.845 00:43:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.104 request: 00:28:06.104 { 00:28:06.104 "name": "nvme0", 00:28:06.104 "trtype": "tcp", 00:28:06.104 "traddr": "10.0.0.1", 00:28:06.104 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:06.104 "adrfam": "ipv4", 00:28:06.104 "trsvcid": "4420", 00:28:06.104 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:06.104 "dhchap_key": "key2", 00:28:06.104 "method": "bdev_nvme_attach_controller", 00:28:06.104 "req_id": 1 00:28:06.104 } 00:28:06.104 Got JSON-RPC error response 00:28:06.104 response: 00:28:06.104 { 00:28:06.104 "code": -32602, 00:28:06.104 "message": "Invalid parameters" 00:28:06.104 } 00:28:06.104 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:28:06.104 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:28:06.104 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:06.104 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:06.104 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:06.104 00:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.104 00:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:06.104 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.104 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@649 -- # local es=0 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.105 request: 00:28:06.105 { 00:28:06.105 "name": "nvme0", 00:28:06.105 "trtype": "tcp", 00:28:06.105 "traddr": "10.0.0.1", 00:28:06.105 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:06.105 "adrfam": "ipv4", 00:28:06.105 "trsvcid": "4420", 00:28:06.105 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:06.105 "dhchap_key": "key1", 00:28:06.105 "dhchap_ctrlr_key": "ckey2", 00:28:06.105 "method": "bdev_nvme_attach_controller", 00:28:06.105 "req_id": 1 00:28:06.105 } 00:28:06.105 Got JSON-RPC error response 00:28:06.105 response: 00:28:06.105 { 00:28:06.105 "code": -32602, 00:28:06.105 "message": "Invalid parameters" 00:28:06.105 } 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@652 -- # es=1 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:06.105 rmmod nvme_tcp 00:28:06.105 rmmod nvme_fabrics 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2152128 ']' 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2152128 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@947 -- # '[' -z 2152128 ']' 00:28:06.105 00:43:32 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@951 -- # kill -0 2152128 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # uname 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:28:06.105 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2152128 00:28:06.362 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:28:06.362 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:28:06.362 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2152128' 00:28:06.362 killing process with pid 2152128 00:28:06.362 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # kill 2152128 00:28:06.362 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@971 -- # wait 2152128 00:28:06.620 00:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:06.620 00:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:06.620 00:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:06.620 00:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:06.620 00:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:06.620 00:43:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.620 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:06.620 00:43:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.152 00:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:09.152 00:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:09.152 00:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:09.152 00:43:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:09.152 00:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:09.152 00:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:28:09.152 00:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:09.153 00:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:09.153 00:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:09.153 00:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:09.153 00:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:09.153 00:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:09.153 00:43:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:28:11.684 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:28:11.684 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:28:11.684 0000:79:02.0 (8086 
0cfe): idxd -> vfio-pci 00:28:11.684 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:28:11.684 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:28:11.684 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:28:11.684 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:28:11.684 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:28:11.684 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:28:11.684 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:28:11.684 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:28:11.945 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:28:11.945 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:28:11.945 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:28:11.945 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:28:11.945 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:28:13.320 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:28:13.889 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci 00:28:14.146 00:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.nKp /tmp/spdk.key-null.BI3 /tmp/spdk.key-sha256.LBT /tmp/spdk.key-sha384.iVZ /tmp/spdk.key-sha512.ef9 /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvme-auth.log 00:28:14.146 00:43:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:28:16.676 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:16.676 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:16.676 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:16.676 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:16.676 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:16.676 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:16.676 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:16.676 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:16.676 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:16.676 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:16.676 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:16.676 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:16.676 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:16.676 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:16.676 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:16.676 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:16.676 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:28:16.676 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:28:17.304 00:28:17.304 real 0m52.641s 00:28:17.304 user 0m43.728s 00:28:17.304 sys 0m12.384s 00:28:17.304 00:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # xtrace_disable 00:28:17.304 00:43:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.304 ************************************ 00:28:17.304 END TEST nvmf_auth_host 00:28:17.304 ************************************ 00:28:17.304 00:43:43 nvmf_tcp -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:28:17.304 00:43:43 nvmf_tcp -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:17.304 00:43:43 nvmf_tcp -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:28:17.304 00:43:43 nvmf_tcp -- common/autotest_common.sh@1104 -- # xtrace_disable 00:28:17.304 00:43:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:17.304 ************************************ 00:28:17.304 START 
TEST nvmf_digest 00:28:17.304 ************************************ 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:17.304 * Looking for test storage... 00:28:17.304 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:17.304 00:43:43 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:28:17.304 00:43:43 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- 
# [[ '' == e810 ]] 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:28:22.580 Found 0000:27:00.0 (0x8086 - 0x159b) 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:28:22.580 Found 0000:27:00.1 (0x8086 - 0x159b) 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:28:22.580 Found net devices under 0000:27:00.0: cvl_0_0 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:28:22.580 Found net devices under 0000:27:00.1: cvl_0_1 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.580 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.581 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:22.581 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:22.581 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.581 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.581 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.581 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.581 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:22.581 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.581 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.581 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:22.581 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:22.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:28:22.581 00:28:22.581 --- 10.0.0.2 ping statistics --- 00:28:22.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.581 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:28:22.581 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:22.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:28:22.581 00:28:22.581 --- 10.0.0.1 ping statistics --- 00:28:22.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.581 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:28:22.581 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.581 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:28:22.581 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:22.581 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.581 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:22.581 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:22.581 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.581 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:22.581 00:43:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:22.841 00:43:48 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:22.841 00:43:48 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 1 -eq 1 ]] 00:28:22.841 00:43:48 nvmf_tcp.nvmf_digest -- host/digest.sh@142 -- # run_test nvmf_digest_dsa_initiator run_digest dsa_initiator 00:28:22.841 00:43:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:28:22.841 00:43:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1104 -- # xtrace_disable 00:28:22.841 00:43:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:22.841 ************************************ 00:28:22.841 START TEST nvmf_digest_dsa_initiator 00:28:22.841 ************************************ 00:28:22.841 00:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@1122 -- # run_digest dsa_initiator 00:28:22.841 00:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@120 -- # local dsa_initiator 00:28:22.841 00:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@121 -- # [[ dsa_initiator == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:22.841 00:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@121 -- # dsa_initiator=true 00:28:22.841 00:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:22.841 00:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:22.841 00:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:22.841 00:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@721 -- # xtrace_disable 00:28:22.841 00:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@10 -- # set +x 00:28:22.841 00:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- nvmf/common.sh@481 -- # nvmfpid=2167948 00:28:22.841 00:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- nvmf/common.sh@482 -- # waitforlisten 2167948 00:28:22.841 00:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@828 -- # '[' -z 2167948 ']' 00:28:22.841 00:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.841 00:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- 
common/autotest_common.sh@833 -- # local max_retries=100 00:28:22.841 00:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.841 00:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@837 -- # xtrace_disable 00:28:22.841 00:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@10 -- # set +x 00:28:22.841 00:43:48 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:22.841 [2024-05-15 00:43:48.870472] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:28:22.841 [2024-05-15 00:43:48.870585] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.841 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.841 [2024-05-15 00:43:48.998223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:23.100 [2024-05-15 00:43:49.096159] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:23.100 [2024-05-15 00:43:49.096195] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:23.100 [2024-05-15 00:43:49.096206] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:23.100 [2024-05-15 00:43:49.096220] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:23.100 [2024-05-15 00:43:49.096228] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:23.100 [2024-05-15 00:43:49.096258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.666 00:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:28:23.666 00:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@861 -- # return 0 00:28:23.666 00:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:23.666 00:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@727 -- # xtrace_disable 00:28:23.666 00:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@10 -- # set +x 00:28:23.666 00:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:23.666 00:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@125 -- # [[ dsa_initiator == \d\s\a\_\t\a\r\g\e\t ]] 00:28:23.666 00:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@126 -- # common_target_config 00:28:23.666 00:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@43 -- # rpc_cmd 00:28:23.666 00:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@560 -- # xtrace_disable 00:28:23.666 00:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@10 -- # set +x 00:28:23.666 null0 00:28:23.666 [2024-05-15 00:43:49.770062] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:23.666 [2024-05-15 00:43:49.793969] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:28:23.666 [2024-05-15 00:43:49.794261] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:23.666 00:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:28:23.666 00:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@128 -- # run_bperf randread 4096 128 true 00:28:23.666 00:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:23.666 00:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:23.666 00:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # rw=randread 00:28:23.666 00:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # bs=4096 00:28:23.666 00:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # qd=128 00:28:23.667 00:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # scan_dsa=true 00:28:23.667 00:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@83 -- # bperfpid=2168225 00:28:23.667 00:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@84 -- # waitforlisten 2168225 /var/tmp/bperf.sock 00:28:23.667 00:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@828 -- # '[' -z 2168225 ']' 00:28:23.667 00:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:23.667 00:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@833 -- # local max_retries=100 00:28:23.667 00:43:49 
nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:23.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:23.667 00:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@837 -- # xtrace_disable 00:28:23.667 00:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@10 -- # set +x 00:28:23.667 00:43:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:23.924 [2024-05-15 00:43:49.872050] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:28:23.924 [2024-05-15 00:43:49.872157] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2168225 ] 00:28:23.924 EAL: No free 2048 kB hugepages reported on node 1 00:28:23.924 [2024-05-15 00:43:50.007336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:24.182 [2024-05-15 00:43:50.153117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:24.441 00:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:28:24.441 00:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@861 -- # return 0 00:28:24.441 00:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@86 -- # true 00:28:24.441 00:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@86 -- # bperf_rpc dsa_scan_accel_module 00:28:24.441 00:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:28:24.700 [2024-05-15 00:43:50.709934] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:28:24.700 00:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:24.700 00:43:50 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:31.262 00:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:31.262 00:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:31.262 nvme0n1 00:28:31.262 00:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:31.262 00:43:57 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:31.262 Running I/O for 2 seconds... 
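The randread 4096/qd128 pass launched above is driven entirely over an RPC socket rather than from a bdevperf config file: bdevperf starts paused with --wait-for-rpc, the DSA accel module is enabled on the initiator, framework init completes, the NVMe/TCP controller is attached with data digest (--ddgst) enabled, and bdevperf.py then triggers the timed run. A minimal shell recap of that sequence, condensed from the bperf_rpc/bperf_py calls traced above; rootdir is just shorthand for this job's SPDK checkout and is an assumption of the sketch, not a variable from the script.

# Sketch condensed from the trace above, not a verbatim excerpt of digest.sh.
rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk   # assumed shorthand for this rig's checkout
bperfsock=/var/tmp/bperf.sock

# 1. start bdevperf paused, waiting for RPC configuration
$rootdir/build/examples/bdevperf -m 2 -r $bperfsock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

# 2. enable DSA on the initiator, then finish framework init
$rootdir/scripts/rpc.py -s $bperfsock dsa_scan_accel_module
$rootdir/scripts/rpc.py -s $bperfsock framework_start_init

# 3. attach the NVMe/TCP controller with data digest enabled
$rootdir/scripts/rpc.py -s $bperfsock bdev_nvme_attach_controller --ddgst \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 4. run the timed workload against the attached nvme0n1 bdev
$rootdir/examples/bdev/bdevperf/bdevperf.py -s $bperfsock perform_tests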
00:28:33.792 00:28:33.792 Latency(us) 00:28:33.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:33.792 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:33.792 nvme0n1 : 2.00 21725.18 84.86 0.00 0.00 5883.96 2966.37 16625.45 00:28:33.792 =================================================================================================================== 00:28:33.792 Total : 21725.18 84.86 0.00 0.00 5883.96 2966.37 16625.45 00:28:33.792 0 00:28:33.792 00:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:33.792 00:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@93 -- # get_accel_stats 00:28:33.792 00:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:33.792 00:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:33.792 | select(.opcode=="crc32c") 00:28:33.792 | "\(.module_name) \(.executed)"' 00:28:33.792 00:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:33.792 00:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@94 -- # true 00:28:33.792 00:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@94 -- # exp_module=dsa 00:28:33.792 00:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:33.792 00:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@96 -- # [[ dsa == \d\s\a ]] 00:28:33.792 00:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@98 -- # killprocess 2168225 00:28:33.792 00:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@947 -- # '[' -z 2168225 ']' 00:28:33.792 00:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@951 -- # kill -0 2168225 00:28:33.792 00:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # uname 00:28:33.792 00:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:28:33.792 00:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2168225 00:28:33.792 00:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:28:33.792 00:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:28:33.792 00:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2168225' 00:28:33.792 killing process with pid 2168225 00:28:33.792 00:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@966 -- # kill 2168225 00:28:33.792 Received shutdown signal, test time was about 2.000000 seconds 00:28:33.792 00:28:33.792 Latency(us) 00:28:33.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:33.792 =================================================================================================================== 00:28:33.792 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:33.792 00:43:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@971 -- # wait 2168225 00:28:35.693 00:44:01 
nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@129 -- # run_bperf randread 131072 16 true 00:28:35.693 00:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:35.693 00:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:35.693 00:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # rw=randread 00:28:35.693 00:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # bs=131072 00:28:35.693 00:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # qd=16 00:28:35.693 00:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # scan_dsa=true 00:28:35.693 00:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@83 -- # bperfpid=2170353 00:28:35.693 00:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@84 -- # waitforlisten 2170353 /var/tmp/bperf.sock 00:28:35.693 00:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@828 -- # '[' -z 2170353 ']' 00:28:35.693 00:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:35.693 00:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@833 -- # local max_retries=100 00:28:35.693 00:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:35.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:35.693 00:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@837 -- # xtrace_disable 00:28:35.693 00:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@10 -- # set +x 00:28:35.693 00:44:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:35.693 [2024-05-15 00:44:01.504634] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:28:35.693 [2024-05-15 00:44:01.504718] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2170353 ] 00:28:35.693 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:35.693 Zero copy mechanism will not be used. 
00:28:35.693 EAL: No free 2048 kB hugepages reported on node 1 00:28:35.693 [2024-05-15 00:44:01.586657] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.693 [2024-05-15 00:44:01.683638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:36.262 00:44:02 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:28:36.262 00:44:02 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@861 -- # return 0 00:28:36.262 00:44:02 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@86 -- # true 00:28:36.262 00:44:02 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@86 -- # bperf_rpc dsa_scan_accel_module 00:28:36.262 00:44:02 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:28:36.262 [2024-05-15 00:44:02.340179] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:28:36.262 00:44:02 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:36.262 00:44:02 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:42.826 00:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:42.826 00:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:42.826 nvme0n1 00:28:42.826 00:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:42.826 00:44:08 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:42.826 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:42.826 Zero copy mechanism will not be used. 00:28:42.826 Running I/O for 2 seconds... 
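Every bperf pass in this test is scored the same way once the 2-second run completes: accel_get_stats is read back over the bperf socket, filtered with jq for crc32c operations, and the pass only counts if a non-zero number of crc32c operations executed on the expected module (dsa for these initiator runs). A sketch of that check, reusing the rootdir shorthand from the previous sketch and only the RPC call and jq filter visible in the trace.

# Sketch of the post-run digest-offload check.
read -r acc_module acc_executed < <(
  $rootdir/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
    jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)

exp_module=dsa                       # dsa_initiator expects the DSA module to have done the hashing
(( acc_executed > 0 ))               # some crc32c work must actually have executed
[[ $acc_module == "$exp_module" ]]   # and it must have run on the expected module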
00:28:45.359 00:28:45.359 Latency(us) 00:28:45.359 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.359 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:45.359 nvme0n1 : 2.00 6831.85 853.98 0.00 0.00 2339.05 502.30 4225.35 00:28:45.359 =================================================================================================================== 00:28:45.359 Total : 6831.85 853.98 0.00 0.00 2339.05 502.30 4225.35 00:28:45.359 0 00:28:45.359 00:44:10 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:45.359 00:44:10 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@93 -- # get_accel_stats 00:28:45.359 00:44:10 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:45.359 00:44:10 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:45.359 00:44:10 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:45.359 | select(.opcode=="crc32c") 00:28:45.359 | "\(.module_name) \(.executed)"' 00:28:45.359 00:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@94 -- # true 00:28:45.359 00:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@94 -- # exp_module=dsa 00:28:45.359 00:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:45.359 00:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@96 -- # [[ dsa == \d\s\a ]] 00:28:45.359 00:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@98 -- # killprocess 2170353 00:28:45.359 00:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@947 -- # '[' -z 2170353 ']' 00:28:45.359 00:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@951 -- # kill -0 2170353 00:28:45.359 00:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # uname 00:28:45.359 00:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:28:45.359 00:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2170353 00:28:45.359 00:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:28:45.359 00:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:28:45.359 00:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2170353' 00:28:45.359 killing process with pid 2170353 00:28:45.359 00:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@966 -- # kill 2170353 00:28:45.359 Received shutdown signal, test time was about 2.000000 seconds 00:28:45.359 00:28:45.359 Latency(us) 00:28:45.359 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.359 =================================================================================================================== 00:28:45.359 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:45.359 00:44:11 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@971 -- # wait 2170353 00:28:47.263 00:44:13 
nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 true 00:28:47.263 00:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:47.263 00:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:47.263 00:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # rw=randwrite 00:28:47.263 00:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # bs=4096 00:28:47.263 00:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # qd=128 00:28:47.263 00:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # scan_dsa=true 00:28:47.263 00:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@83 -- # bperfpid=2172559 00:28:47.263 00:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@84 -- # waitforlisten 2172559 /var/tmp/bperf.sock 00:28:47.263 00:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@828 -- # '[' -z 2172559 ']' 00:28:47.263 00:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:47.263 00:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@833 -- # local max_retries=100 00:28:47.263 00:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:47.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:47.263 00:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@837 -- # xtrace_disable 00:28:47.263 00:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@10 -- # set +x 00:28:47.263 00:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:47.263 [2024-05-15 00:44:13.145103] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:28:47.263 [2024-05-15 00:44:13.145252] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2172559 ] 00:28:47.263 EAL: No free 2048 kB hugepages reported on node 1 00:28:47.263 [2024-05-15 00:44:13.278896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:47.263 [2024-05-15 00:44:13.377312] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.828 00:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:28:47.828 00:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@861 -- # return 0 00:28:47.828 00:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@86 -- # true 00:28:47.828 00:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@86 -- # bperf_rpc dsa_scan_accel_module 00:28:47.828 00:44:13 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:28:48.086 [2024-05-15 00:44:13.997948] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:28:48.086 00:44:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:48.086 00:44:14 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:54.643 00:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:54.643 00:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:54.643 nvme0n1 00:28:54.643 00:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:54.643 00:44:20 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:54.643 Running I/O for 2 seconds... 
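All of these runs connect to 10.0.0.2:4420, the phy-fallback TCP test bed that nvmf_tcp_init plumbed near the top of this test: one of the two ice ports is moved into a dedicated network namespace for the target, both sides get a 10.0.0.x address, port 4420 is opened, and reachability is checked in both directions before nvme-tcp is loaded. A condensed recap using the interface names this particular rig reported (cvl_0_0/cvl_0_1); other hardware would report different names.

# Sketch of the namespace plumbing traced earlier in this test.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port lives in the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                       # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # target -> initiator
modprobe nvme-tcp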
00:28:56.546 00:28:56.546 Latency(us) 00:28:56.546 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.546 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:56.546 nvme0n1 : 2.00 27514.10 107.48 0.00 0.00 4641.38 1897.09 11313.58 00:28:56.546 =================================================================================================================== 00:28:56.546 Total : 27514.10 107.48 0.00 0.00 4641.38 1897.09 11313.58 00:28:56.546 0 00:28:56.546 00:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:56.546 00:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@93 -- # get_accel_stats 00:28:56.546 00:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:56.546 00:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:56.546 | select(.opcode=="crc32c") 00:28:56.546 | "\(.module_name) \(.executed)"' 00:28:56.546 00:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:56.546 00:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@94 -- # true 00:28:56.546 00:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@94 -- # exp_module=dsa 00:28:56.546 00:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:56.546 00:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@96 -- # [[ dsa == \d\s\a ]] 00:28:56.546 00:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@98 -- # killprocess 2172559 00:28:56.546 00:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@947 -- # '[' -z 2172559 ']' 00:28:56.546 00:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@951 -- # kill -0 2172559 00:28:56.546 00:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # uname 00:28:56.546 00:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:28:56.546 00:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2172559 00:28:56.806 00:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:28:56.806 00:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:28:56.806 00:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2172559' 00:28:56.806 killing process with pid 2172559 00:28:56.806 00:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@966 -- # kill 2172559 00:28:56.806 Received shutdown signal, test time was about 2.000000 seconds 00:28:56.806 00:28:56.806 Latency(us) 00:28:56.806 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.806 =================================================================================================================== 00:28:56.806 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:56.806 00:44:22 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@971 -- # wait 2172559 00:28:58.806 00:44:24 
nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 true 00:28:58.806 00:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:58.806 00:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:58.806 00:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # rw=randwrite 00:28:58.806 00:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # bs=131072 00:28:58.806 00:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # qd=16 00:28:58.806 00:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@80 -- # scan_dsa=true 00:28:58.806 00:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@83 -- # bperfpid=2174838 00:28:58.806 00:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@84 -- # waitforlisten 2174838 /var/tmp/bperf.sock 00:28:58.806 00:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@828 -- # '[' -z 2174838 ']' 00:28:58.806 00:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:58.806 00:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@833 -- # local max_retries=100 00:28:58.806 00:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:58.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:58.806 00:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@837 -- # xtrace_disable 00:28:58.806 00:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@10 -- # set +x 00:28:58.806 00:44:24 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:58.806 [2024-05-15 00:44:24.728437] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:28:58.806 [2024-05-15 00:44:24.728565] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2174838 ] 00:28:58.806 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:58.806 Zero copy mechanism will not be used. 
00:28:58.806 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.806 [2024-05-15 00:44:24.838913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:58.806 [2024-05-15 00:44:24.935630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:59.376 00:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:28:59.376 00:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@861 -- # return 0 00:28:59.376 00:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@86 -- # true 00:28:59.376 00:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@86 -- # bperf_rpc dsa_scan_accel_module 00:28:59.376 00:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:28:59.636 [2024-05-15 00:44:25.564152] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:28:59.636 00:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:59.636 00:44:25 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:06.200 00:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:06.200 00:44:31 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:06.200 nvme0n1 00:29:06.200 00:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:06.200 00:44:32 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:06.200 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:06.200 Zero copy mechanism will not be used. 00:29:06.200 Running I/O for 2 seconds... 
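The target side of these runs was launched inside that namespace with all tracepoint groups enabled and left waiting for RPC configuration; the launch command and the resulting "Listening on 10.0.0.2 port 4420" notice are both in the trace, but the listener/subsystem RPCs themselves are issued inside digest.sh's common_target_config and never echoed here. The second half of the sketch below is therefore only a typical equivalent (a null bdev exported as cnode1 over TCP), an assumption rather than a verbatim copy of what the script ran.

# Verbatim from the trace: target runs inside the namespace, paused for RPC.
ip netns exec cvl_0_0_ns_spdk $rootdir/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &

# Assumed/typical configuration for a digest test like this one (not traced verbatim):
$rootdir/scripts/rpc.py framework_start_init
$rootdir/scripts/rpc.py nvmf_create_transport -t tcp -o        # matches NVMF_TRANSPORT_OPTS='-t tcp -o' above
$rootdir/scripts/rpc.py bdev_null_create null0 100 512
$rootdir/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
$rootdir/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
$rootdir/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420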
00:29:08.732 00:29:08.732 Latency(us) 00:29:08.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.732 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:08.732 nvme0n1 : 2.00 7771.08 971.39 0.00 0.00 2055.47 1129.63 4828.97 00:29:08.732 =================================================================================================================== 00:29:08.732 Total : 7771.08 971.39 0.00 0.00 2055.47 1129.63 4828.97 00:29:08.732 0 00:29:08.732 00:44:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:08.732 00:44:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@93 -- # get_accel_stats 00:29:08.732 00:44:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:08.732 00:44:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:08.732 | select(.opcode=="crc32c") 00:29:08.732 | "\(.module_name) \(.executed)"' 00:29:08.732 00:44:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:08.732 00:44:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@94 -- # true 00:29:08.732 00:44:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@94 -- # exp_module=dsa 00:29:08.732 00:44:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:08.732 00:44:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@96 -- # [[ dsa == \d\s\a ]] 00:29:08.732 00:44:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@98 -- # killprocess 2174838 00:29:08.732 00:44:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@947 -- # '[' -z 2174838 ']' 00:29:08.732 00:44:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@951 -- # kill -0 2174838 00:29:08.732 00:44:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # uname 00:29:08.732 00:44:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:08.732 00:44:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2174838 00:29:08.732 00:44:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:29:08.732 00:44:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:29:08.732 00:44:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2174838' 00:29:08.732 killing process with pid 2174838 00:29:08.732 00:44:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@966 -- # kill 2174838 00:29:08.732 Received shutdown signal, test time was about 2.000000 seconds 00:29:08.732 00:29:08.732 Latency(us) 00:29:08.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.732 =================================================================================================================== 00:29:08.732 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:08.732 00:44:34 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@971 -- # wait 2174838 00:29:10.637 00:44:36 
nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- host/digest.sh@132 -- # killprocess 2167948 00:29:10.637 00:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@947 -- # '[' -z 2167948 ']' 00:29:10.637 00:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@951 -- # kill -0 2167948 00:29:10.637 00:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # uname 00:29:10.637 00:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:10.637 00:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2167948 00:29:10.637 00:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:29:10.637 00:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:29:10.637 00:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2167948' 00:29:10.637 killing process with pid 2167948 00:29:10.637 00:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@966 -- # kill 2167948 00:29:10.637 [2024-05-15 00:44:36.477072] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:10.637 00:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@971 -- # wait 2167948 00:29:10.895 00:29:10.895 real 0m48.155s 00:29:10.895 user 1m8.463s 00:29:10.895 sys 0m3.931s 00:29:10.895 00:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:10.895 00:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_initiator -- common/autotest_common.sh@10 -- # set +x 00:29:10.895 ************************************ 00:29:10.895 END TEST nvmf_digest_dsa_initiator 00:29:10.895 ************************************ 00:29:10.895 00:44:36 nvmf_tcp.nvmf_digest -- host/digest.sh@143 -- # run_test nvmf_digest_dsa_target run_digest dsa_target 00:29:10.895 00:44:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:29:10.895 00:44:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:10.895 00:44:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:10.895 ************************************ 00:29:10.895 START TEST nvmf_digest_dsa_target 00:29:10.895 ************************************ 00:29:10.895 00:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@1122 -- # run_digest dsa_target 00:29:10.895 00:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@120 -- # local dsa_initiator 00:29:10.895 00:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@121 -- # [[ dsa_target == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:10.895 00:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@121 -- # dsa_initiator=false 00:29:10.895 00:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:10.895 00:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:10.895 00:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:10.895 
00:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:10.895 00:44:36 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:29:10.895 00:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- nvmf/common.sh@481 -- # nvmfpid=2177251 00:29:10.895 00:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- nvmf/common.sh@482 -- # waitforlisten 2177251 00:29:10.895 00:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@828 -- # '[' -z 2177251 ']' 00:29:10.895 00:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.895 00:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:10.895 00:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.895 00:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:10.895 00:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:29:10.895 00:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:11.152 [2024-05-15 00:44:37.069757] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:29:11.152 [2024-05-15 00:44:37.069822] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:11.152 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.153 [2024-05-15 00:44:37.160376] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.153 [2024-05-15 00:44:37.258516] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:11.153 [2024-05-15 00:44:37.258556] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:11.153 [2024-05-15 00:44:37.258566] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:11.153 [2024-05-15 00:44:37.258575] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:11.153 [2024-05-15 00:44:37.258582] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:11.153 [2024-05-15 00:44:37.258614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.719 00:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:11.719 00:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@861 -- # return 0 00:29:11.719 00:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:11.719 00:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:11.719 00:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:29:11.719 00:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:11.719 00:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@125 -- # [[ dsa_target == \d\s\a\_\t\a\r\g\e\t ]] 00:29:11.719 00:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@125 -- # rpc_cmd dsa_scan_accel_module 00:29:11.719 00:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:11.719 00:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:29:11.719 [2024-05-15 00:44:37.799077] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:29:11.719 00:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:11.719 00:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@126 -- # common_target_config 00:29:11.719 00:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@43 -- # rpc_cmd 00:29:11.719 00:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:11.720 00:44:37 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:29:18.292 null0 00:29:18.292 [2024-05-15 00:44:43.903674] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:18.292 [2024-05-15 00:44:43.930289] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:18.292 [2024-05-15 00:44:43.930587] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:18.292 00:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:18.292 00:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:29:18.292 00:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:18.292 00:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:18.292 00:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # rw=randread 00:29:18.292 00:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # bs=4096 00:29:18.292 00:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # qd=128 00:29:18.292 00:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # scan_dsa=false 00:29:18.292 00:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@83 -- # bperfpid=2178468 00:29:18.292 00:44:43 
nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@84 -- # waitforlisten 2178468 /var/tmp/bperf.sock 00:29:18.292 00:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@828 -- # '[' -z 2178468 ']' 00:29:18.292 00:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:18.292 00:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:18.292 00:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:18.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:18.292 00:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:18.293 00:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:29:18.293 00:44:43 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:18.293 [2024-05-15 00:44:44.009457] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:29:18.293 [2024-05-15 00:44:44.009579] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2178468 ] 00:29:18.293 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.293 [2024-05-15 00:44:44.149573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.293 [2024-05-15 00:44:44.304406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:18.550 00:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:18.550 00:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@861 -- # return 0 00:29:18.550 00:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@86 -- # false 00:29:18.550 00:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:18.550 00:44:44 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:19.118 00:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:19.118 00:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:19.391 nvme0n1 00:29:19.391 00:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:19.391 00:44:45 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:19.391 Running I/O for 2 seconds... 
00:29:21.295 00:29:21.295 Latency(us) 00:29:21.295 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.295 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:21.295 nvme0n1 : 2.00 22066.87 86.20 0.00 0.00 5794.52 2379.99 17315.30 00:29:21.295 =================================================================================================================== 00:29:21.295 Total : 22066.87 86.20 0.00 0.00 5794.52 2379.99 17315.30 00:29:21.295 0 00:29:21.295 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:21.295 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@93 -- # get_accel_stats 00:29:21.295 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:21.295 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:21.295 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:21.295 | select(.opcode=="crc32c") 00:29:21.295 | "\(.module_name) \(.executed)"' 00:29:21.555 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@94 -- # false 00:29:21.555 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@94 -- # exp_module=software 00:29:21.555 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:21.555 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:21.555 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@98 -- # killprocess 2178468 00:29:21.555 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@947 -- # '[' -z 2178468 ']' 00:29:21.555 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@951 -- # kill -0 2178468 00:29:21.555 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # uname 00:29:21.555 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:21.555 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2178468 00:29:21.555 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:29:21.555 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:29:21.555 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2178468' 00:29:21.555 killing process with pid 2178468 00:29:21.555 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@966 -- # kill 2178468 00:29:21.555 Received shutdown signal, test time was about 2.000000 seconds 00:29:21.555 00:29:21.555 Latency(us) 00:29:21.555 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.555 =================================================================================================================== 00:29:21.555 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:21.555 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@971 -- # wait 2178468 00:29:21.815 00:44:47 
nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:21.815 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:21.815 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:21.815 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # rw=randread 00:29:21.815 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # bs=131072 00:29:21.815 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # qd=16 00:29:21.815 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # scan_dsa=false 00:29:21.815 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@83 -- # bperfpid=2179359 00:29:21.815 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@84 -- # waitforlisten 2179359 /var/tmp/bperf.sock 00:29:21.815 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@828 -- # '[' -z 2179359 ']' 00:29:21.815 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:21.815 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:21.815 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:21.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:21.815 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:21.815 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:29:21.815 00:44:47 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:22.075 [2024-05-15 00:44:48.036497] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:29:22.075 [2024-05-15 00:44:48.036627] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179359 ] 00:29:22.076 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:22.076 Zero copy mechanism will not be used. 
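The check traced just above (host/digest.sh@93-@96) reads the crc32c statistics back from the bdevperf instance over its RPC socket and verifies that the expected accel module actually executed the digests. A minimal standalone sketch of that step, assuming the same rpc.py path and /var/tmp/bperf.sock socket shown in the trace; the shell variables are only illustrative:

# Query accel statistics from the running bdevperf instance and keep only the
# crc32c counters -- the same jq filter that host/digest.sh@37 uses.
read -r acc_module acc_executed < <(
    /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)

# With DSA scanning disabled (scan_dsa=false) the digests should have been
# computed in software, and at least one operation must have run.
exp_module=software
(( acc_executed > 0 )) && [[ $acc_module == "$exp_module" ]]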
00:29:22.076 EAL: No free 2048 kB hugepages reported on node 1 00:29:22.076 [2024-05-15 00:44:48.151923] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.333 [2024-05-15 00:44:48.248990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.591 00:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:22.591 00:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@861 -- # return 0 00:29:22.591 00:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@86 -- # false 00:29:22.591 00:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:22.591 00:44:48 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:23.156 00:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:23.156 00:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:23.156 nvme0n1 00:29:23.156 00:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:23.156 00:44:49 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:23.156 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:23.156 Zero copy mechanism will not be used. 00:29:23.156 Running I/O for 2 seconds... 
00:29:25.692 00:29:25.692 Latency(us) 00:29:25.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.692 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:25.692 nvme0n1 : 2.00 7277.78 909.72 0.00 0.00 2195.54 398.82 8588.67 00:29:25.692 =================================================================================================================== 00:29:25.692 Total : 7277.78 909.72 0.00 0.00 2195.54 398.82 8588.67 00:29:25.692 0 00:29:25.692 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:25.692 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@93 -- # get_accel_stats 00:29:25.692 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:25.692 | select(.opcode=="crc32c") 00:29:25.692 | "\(.module_name) \(.executed)"' 00:29:25.692 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:25.692 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:25.692 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@94 -- # false 00:29:25.692 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@94 -- # exp_module=software 00:29:25.692 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:25.692 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:25.692 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@98 -- # killprocess 2179359 00:29:25.692 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@947 -- # '[' -z 2179359 ']' 00:29:25.692 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@951 -- # kill -0 2179359 00:29:25.692 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # uname 00:29:25.692 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:25.692 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2179359 00:29:25.692 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:29:25.692 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:29:25.692 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2179359' 00:29:25.692 killing process with pid 2179359 00:29:25.692 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@966 -- # kill 2179359 00:29:25.692 Received shutdown signal, test time was about 2.000000 seconds 00:29:25.692 00:29:25.692 Latency(us) 00:29:25.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.692 =================================================================================================================== 00:29:25.692 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:25.692 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@971 -- # wait 2179359 00:29:25.951 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target 
-- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:25.951 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:25.951 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:25.951 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # rw=randwrite 00:29:25.951 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # bs=4096 00:29:25.951 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # qd=128 00:29:25.951 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # scan_dsa=false 00:29:25.951 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@83 -- # bperfpid=2179987 00:29:25.951 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@84 -- # waitforlisten 2179987 /var/tmp/bperf.sock 00:29:25.951 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@828 -- # '[' -z 2179987 ']' 00:29:25.951 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:25.951 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:25.951 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:25.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:25.951 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:25.951 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:29:25.951 00:44:51 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:25.951 [2024-05-15 00:44:51.933354] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:29:25.951 [2024-05-15 00:44:51.933471] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179987 ] 00:29:25.951 EAL: No free 2048 kB hugepages reported on node 1 00:29:25.951 [2024-05-15 00:44:52.047437] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:26.211 [2024-05-15 00:44:52.144565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:26.469 00:44:52 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:26.469 00:44:52 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@861 -- # return 0 00:29:26.469 00:44:52 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@86 -- # false 00:29:26.469 00:44:52 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:26.469 00:44:52 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:27.032 00:44:52 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:27.032 00:44:52 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:27.032 nvme0n1 00:29:27.032 00:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:27.032 00:44:53 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:27.290 Running I/O for 2 seconds... 
00:29:29.192 00:29:29.192 Latency(us) 00:29:29.192 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.192 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:29.192 nvme0n1 : 2.00 25622.25 100.09 0.00 0.00 4986.21 2017.82 7277.95 00:29:29.192 =================================================================================================================== 00:29:29.192 Total : 25622.25 100.09 0.00 0.00 4986.21 2017.82 7277.95 00:29:29.192 0 00:29:29.192 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:29.192 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@93 -- # get_accel_stats 00:29:29.192 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:29.192 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:29.192 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:29.192 | select(.opcode=="crc32c") 00:29:29.192 | "\(.module_name) \(.executed)"' 00:29:29.450 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@94 -- # false 00:29:29.450 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@94 -- # exp_module=software 00:29:29.450 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:29.450 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:29.450 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@98 -- # killprocess 2179987 00:29:29.450 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@947 -- # '[' -z 2179987 ']' 00:29:29.450 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@951 -- # kill -0 2179987 00:29:29.450 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # uname 00:29:29.450 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:29.450 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2179987 00:29:29.450 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:29:29.450 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:29:29.450 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2179987' 00:29:29.450 killing process with pid 2179987 00:29:29.450 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@966 -- # kill 2179987 00:29:29.450 Received shutdown signal, test time was about 2.000000 seconds 00:29:29.450 00:29:29.450 Latency(us) 00:29:29.450 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.450 =================================================================================================================== 00:29:29.450 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:29.450 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@971 -- # wait 2179987 00:29:29.709 00:44:55 
nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:29.709 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:29.709 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:29.709 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # rw=randwrite 00:29:29.709 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # bs=131072 00:29:29.709 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # qd=16 00:29:29.709 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@80 -- # scan_dsa=false 00:29:29.709 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@83 -- # bperfpid=2180880 00:29:29.709 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@84 -- # waitforlisten 2180880 /var/tmp/bperf.sock 00:29:29.709 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@828 -- # '[' -z 2180880 ']' 00:29:29.709 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:29.709 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:29.709 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:29.709 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:29.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:29.709 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:29.709 00:44:55 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:29:29.709 [2024-05-15 00:44:55.833020] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:29:29.709 [2024-05-15 00:44:55.833140] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180880 ] 00:29:29.709 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:29.709 Zero copy mechanism will not be used. 
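Each run_bperf invocation above follows the same sequence: start bdevperf paused with --wait-for-rpc, finish framework initialization, attach an NVMe/TCP controller with data digest enabled (--ddgst), and then drive the timed workload over the RPC socket. A condensed sketch of that sequence for the randwrite 131072/16 case, using the binaries and arguments visible in the trace; SPDK_DIR is introduced here only to shorten the paths, and the real helper also waits for the socket before issuing RPCs:

SPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk

# Start bdevperf on core 1 (-m 2), paused until RPCs arrive, on bperf.sock.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
    -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &

# Complete framework init, then attach the target with TCP data digest enabled.
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock framework_start_init
"$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
    --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Kick off the 2-second run (the "Running I/O for 2 seconds..." phase above).
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests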
00:29:30.035 EAL: No free 2048 kB hugepages reported on node 1 00:29:30.035 [2024-05-15 00:44:55.944846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.035 [2024-05-15 00:44:56.040417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.603 00:44:56 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:30.603 00:44:56 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@861 -- # return 0 00:29:30.603 00:44:56 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@86 -- # false 00:29:30.603 00:44:56 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:30.603 00:44:56 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:30.860 00:44:56 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:30.860 00:44:56 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:31.117 nvme0n1 00:29:31.117 00:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:31.117 00:44:57 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:31.117 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:31.117 Zero copy mechanism will not be used. 00:29:31.117 Running I/O for 2 seconds... 
00:29:33.645 00:29:33.645 Latency(us) 00:29:33.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:33.645 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:33.645 nvme0n1 : 2.00 7919.52 989.94 0.00 0.00 2016.09 1138.26 11796.48 00:29:33.645 =================================================================================================================== 00:29:33.645 Total : 7919.52 989.94 0.00 0.00 2016.09 1138.26 11796.48 00:29:33.645 0 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@93 -- # get_accel_stats 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:33.645 | select(.opcode=="crc32c") 00:29:33.645 | "\(.module_name) \(.executed)"' 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@94 -- # false 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@94 -- # exp_module=software 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@98 -- # killprocess 2180880 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@947 -- # '[' -z 2180880 ']' 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@951 -- # kill -0 2180880 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # uname 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2180880 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2180880' 00:29:33.645 killing process with pid 2180880 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@966 -- # kill 2180880 00:29:33.645 Received shutdown signal, test time was about 2.000000 seconds 00:29:33.645 00:29:33.645 Latency(us) 00:29:33.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:33.645 =================================================================================================================== 00:29:33.645 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@971 -- # wait 2180880 00:29:33.645 00:44:59 
nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- host/digest.sh@132 -- # killprocess 2177251 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@947 -- # '[' -z 2177251 ']' 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@951 -- # kill -0 2177251 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # uname 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2177251 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2177251' 00:29:33.645 killing process with pid 2177251 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@966 -- # kill 2177251 00:29:33.645 [2024-05-15 00:44:59.797159] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:33.645 00:44:59 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@971 -- # wait 2177251 00:29:36.179 00:29:36.179 real 0m24.824s 00:29:36.179 user 0m33.800s 00:29:36.179 sys 0m3.633s 00:29:36.179 00:45:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:36.179 00:45:01 nvmf_tcp.nvmf_digest.nvmf_digest_dsa_target -- common/autotest_common.sh@10 -- # set +x 00:29:36.179 ************************************ 00:29:36.179 END TEST nvmf_digest_dsa_target 00:29:36.179 ************************************ 00:29:36.179 00:45:01 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:36.179 00:45:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:29:36.179 00:45:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:36.179 00:45:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:36.179 ************************************ 00:29:36.179 START TEST nvmf_digest_error 00:29:36.179 ************************************ 00:29:36.179 00:45:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1122 -- # run_digest_error 00:29:36.179 00:45:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:36.179 00:45:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:36.179 00:45:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:36.179 00:45:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:36.179 00:45:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2182223 00:29:36.179 00:45:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2182223 00:29:36.179 00:45:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 2182223 ']' 00:29:36.179 00:45:01 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:36.179 00:45:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:36.179 00:45:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:36.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:36.179 00:45:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:36.179 00:45:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:36.179 00:45:01 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:36.179 [2024-05-15 00:45:01.967928] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:29:36.179 [2024-05-15 00:45:01.968027] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:36.179 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.179 [2024-05-15 00:45:02.089635] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.179 [2024-05-15 00:45:02.188150] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:36.179 [2024-05-15 00:45:02.188186] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:36.179 [2024-05-15 00:45:02.188199] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:36.179 [2024-05-15 00:45:02.188208] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:36.179 [2024-05-15 00:45:02.188216] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
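The nvmf_digest_error test starts its target the same way as the DSA variants: nvmfappstart launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc and then blocks until the application answers on /var/tmp/spdk.sock. A sketch of that startup using the exact command from the trace; the polling loop is only a stand-in for the waitforlisten helper:

SPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk

# Launch nvmf_tgt paused (--wait-for-rpc) inside the test network namespace,
# with all tracepoint groups enabled (-e 0xFFFF), as shown in the trace above.
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!

# Stand-in for waitforlisten: poll until the app responds on /var/tmp/spdk.sock.
until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done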
00:29:36.179 [2024-05-15 00:45:02.188247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.746 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:36.746 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:29:36.746 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:36.746 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:36.746 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:36.747 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:36.747 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:36.747 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.747 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:36.747 [2024-05-15 00:45:02.684743] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:36.747 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.747 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:36.747 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:36.747 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:36.747 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:36.747 null0 00:29:36.747 [2024-05-15 00:45:02.859906] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:36.747 [2024-05-15 00:45:02.883854] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:36.747 [2024-05-15 00:45:02.884191] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:36.747 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:36.747 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:36.747 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:36.747 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:36.747 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:36.747 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:36.747 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2182278 00:29:36.747 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2182278 /var/tmp/bperf.sock 00:29:36.747 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 2182278 ']' 00:29:36.747 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:36.747 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
max_retries=100 00:29:36.747 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:36.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:36.747 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:36.747 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:36.747 00:45:02 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:37.004 [2024-05-15 00:45:02.963741] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:29:37.004 [2024-05-15 00:45:02.963857] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182278 ] 00:29:37.004 EAL: No free 2048 kB hugepages reported on node 1 00:29:37.004 [2024-05-15 00:45:03.078262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:37.261 [2024-05-15 00:45:03.175357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:37.528 00:45:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:37.529 00:45:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:29:37.529 00:45:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:37.529 00:45:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:37.795 00:45:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:37.795 00:45:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:37.795 00:45:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:37.795 00:45:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:37.795 00:45:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:37.795 00:45:03 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:38.054 nvme0n1 00:29:38.054 00:45:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:38.054 00:45:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:38.054 00:45:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:38.313 00:45:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:38.313 00:45:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@69 -- # bperf_py perform_tests 00:29:38.313 00:45:04 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:38.313 Running I/O for 2 seconds... 00:29:38.313 [2024-05-15 00:45:04.307555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.313 [2024-05-15 00:45:04.307604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:12338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.313 [2024-05-15 00:45:04.307618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.313 [2024-05-15 00:45:04.316624] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.313 [2024-05-15 00:45:04.316658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:8656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.313 [2024-05-15 00:45:04.316671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.313 [2024-05-15 00:45:04.327140] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.313 [2024-05-15 00:45:04.327170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.313 [2024-05-15 00:45:04.327186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.313 [2024-05-15 00:45:04.339522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.313 [2024-05-15 00:45:04.339555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.313 [2024-05-15 00:45:04.339566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.313 [2024-05-15 00:45:04.348147] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.313 [2024-05-15 00:45:04.348185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.313 [2024-05-15 00:45:04.348197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.314 [2024-05-15 00:45:04.359871] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.314 [2024-05-15 00:45:04.359901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.314 [2024-05-15 00:45:04.359912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.314 [2024-05-15 00:45:04.369100] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150003a1400) 00:29:38.314 [2024-05-15 00:45:04.369129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.314 [2024-05-15 00:45:04.369139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.314 [2024-05-15 00:45:04.380654] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.314 [2024-05-15 00:45:04.380683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:15790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.314 [2024-05-15 00:45:04.380694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.314 [2024-05-15 00:45:04.392075] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.314 [2024-05-15 00:45:04.392103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:11400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.314 [2024-05-15 00:45:04.392113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.314 [2024-05-15 00:45:04.401544] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.314 [2024-05-15 00:45:04.401583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.314 [2024-05-15 00:45:04.401594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.314 [2024-05-15 00:45:04.410291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.314 [2024-05-15 00:45:04.410321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.314 [2024-05-15 00:45:04.410337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.314 [2024-05-15 00:45:04.420249] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.314 [2024-05-15 00:45:04.420294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.314 [2024-05-15 00:45:04.420304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.314 [2024-05-15 00:45:04.428804] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.314 [2024-05-15 00:45:04.428832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.314 [2024-05-15 00:45:04.428842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.314 [2024-05-15 00:45:04.439589] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.314 [2024-05-15 00:45:04.439617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:3514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.314 [2024-05-15 00:45:04.439629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.314 [2024-05-15 00:45:04.452301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.314 [2024-05-15 00:45:04.452343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.314 [2024-05-15 00:45:04.452353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.314 [2024-05-15 00:45:04.461445] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.314 [2024-05-15 00:45:04.461490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:3170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.314 [2024-05-15 00:45:04.461503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.314 [2024-05-15 00:45:04.474544] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.314 [2024-05-15 00:45:04.474584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.314 [2024-05-15 00:45:04.474595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.586 [2024-05-15 00:45:04.484938] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.586 [2024-05-15 00:45:04.484971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.586 [2024-05-15 00:45:04.484981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.586 [2024-05-15 00:45:04.493990] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.586 [2024-05-15 00:45:04.494028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.586 [2024-05-15 00:45:04.494039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.586 [2024-05-15 00:45:04.506282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.586 [2024-05-15 00:45:04.506312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.586 [2024-05-15 00:45:04.506329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.586 [2024-05-15 00:45:04.517586] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.586 [2024-05-15 00:45:04.517615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.586 [2024-05-15 00:45:04.517625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.586 [2024-05-15 00:45:04.526134] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.586 [2024-05-15 00:45:04.526163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.586 [2024-05-15 00:45:04.526173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.586 [2024-05-15 00:45:04.537689] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.586 [2024-05-15 00:45:04.537718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.586 [2024-05-15 00:45:04.537728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.586 [2024-05-15 00:45:04.551012] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.586 [2024-05-15 00:45:04.551042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.586 [2024-05-15 00:45:04.551052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.586 [2024-05-15 00:45:04.563604] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.586 [2024-05-15 00:45:04.563635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.586 [2024-05-15 00:45:04.563646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.586 [2024-05-15 00:45:04.572614] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.586 [2024-05-15 00:45:04.572643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:21663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.586 [2024-05-15 00:45:04.572655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.586 [2024-05-15 00:45:04.583845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.586 [2024-05-15 00:45:04.583879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.586 [2024-05-15 00:45:04.583891] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.586 [2024-05-15 00:45:04.593066] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.586 [2024-05-15 00:45:04.593097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10754 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.586 [2024-05-15 00:45:04.593110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.586 [2024-05-15 00:45:04.602791] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.586 [2024-05-15 00:45:04.602821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.586 [2024-05-15 00:45:04.602832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.586 [2024-05-15 00:45:04.614649] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.586 [2024-05-15 00:45:04.614681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.586 [2024-05-15 00:45:04.614692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.586 [2024-05-15 00:45:04.622859] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.586 [2024-05-15 00:45:04.622888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:19680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.586 [2024-05-15 00:45:04.622899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.586 [2024-05-15 00:45:04.634481] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.586 [2024-05-15 00:45:04.634511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.586 [2024-05-15 00:45:04.634521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.586 [2024-05-15 00:45:04.645588] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.586 [2024-05-15 00:45:04.645616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.586 [2024-05-15 00:45:04.645626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.586 [2024-05-15 00:45:04.653772] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.587 [2024-05-15 00:45:04.653800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18672 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.587 [2024-05-15 00:45:04.653809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.587 [2024-05-15 00:45:04.664557] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.587 [2024-05-15 00:45:04.664588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.587 [2024-05-15 00:45:04.664599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.587 [2024-05-15 00:45:04.673355] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.587 [2024-05-15 00:45:04.673385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.587 [2024-05-15 00:45:04.673395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.587 [2024-05-15 00:45:04.683896] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.587 [2024-05-15 00:45:04.683927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.587 [2024-05-15 00:45:04.683944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.587 [2024-05-15 00:45:04.694219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.587 [2024-05-15 00:45:04.694253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:20639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.587 [2024-05-15 00:45:04.694266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.587 [2024-05-15 00:45:04.704460] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.587 [2024-05-15 00:45:04.704491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.587 [2024-05-15 00:45:04.704503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.587 [2024-05-15 00:45:04.714160] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.587 [2024-05-15 00:45:04.714194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.587 [2024-05-15 00:45:04.714207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.587 [2024-05-15 00:45:04.723030] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.587 [2024-05-15 00:45:04.723062] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.587 [2024-05-15 00:45:04.723074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.587 [2024-05-15 00:45:04.732579] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.587 [2024-05-15 00:45:04.732612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:21035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.587 [2024-05-15 00:45:04.732625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.587 [2024-05-15 00:45:04.742295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.587 [2024-05-15 00:45:04.742326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.587 [2024-05-15 00:45:04.742336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.846 [2024-05-15 00:45:04.752569] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.846 [2024-05-15 00:45:04.752599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.846 [2024-05-15 00:45:04.752609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.846 [2024-05-15 00:45:04.762558] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.846 [2024-05-15 00:45:04.762587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.846 [2024-05-15 00:45:04.762599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.846 [2024-05-15 00:45:04.771273] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.846 [2024-05-15 00:45:04.771302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:22677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.846 [2024-05-15 00:45:04.771312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.846 [2024-05-15 00:45:04.782816] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.846 [2024-05-15 00:45:04.782845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.846 [2024-05-15 00:45:04.782855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.846 [2024-05-15 00:45:04.794097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6150003a1400) 00:29:38.846 [2024-05-15 00:45:04.794129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.846 [2024-05-15 00:45:04.794141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.846 [2024-05-15 00:45:04.802984] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.846 [2024-05-15 00:45:04.803013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.846 [2024-05-15 00:45:04.803024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.846 [2024-05-15 00:45:04.813464] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.846 [2024-05-15 00:45:04.813492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.846 [2024-05-15 00:45:04.813503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.846 [2024-05-15 00:45:04.825507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.846 [2024-05-15 00:45:04.825537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.846 [2024-05-15 00:45:04.825547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.846 [2024-05-15 00:45:04.833790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.846 [2024-05-15 00:45:04.833818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:20706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.846 [2024-05-15 00:45:04.833829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.846 [2024-05-15 00:45:04.844168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.846 [2024-05-15 00:45:04.844195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.847 [2024-05-15 00:45:04.844206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.847 [2024-05-15 00:45:04.855845] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.847 [2024-05-15 00:45:04.855887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:6000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.847 [2024-05-15 00:45:04.855902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.847 [2024-05-15 
00:45:04.866771] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.847 [2024-05-15 00:45:04.866804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.847 [2024-05-15 00:45:04.866815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.847 [2024-05-15 00:45:04.875455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.847 [2024-05-15 00:45:04.875485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:9579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.847 [2024-05-15 00:45:04.875495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.847 [2024-05-15 00:45:04.885408] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.847 [2024-05-15 00:45:04.885436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.847 [2024-05-15 00:45:04.885446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.847 [2024-05-15 00:45:04.897142] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.847 [2024-05-15 00:45:04.897175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.847 [2024-05-15 00:45:04.897186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.847 [2024-05-15 00:45:04.907352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.847 [2024-05-15 00:45:04.907381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.847 [2024-05-15 00:45:04.907392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.847 [2024-05-15 00:45:04.916590] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.847 [2024-05-15 00:45:04.916619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.847 [2024-05-15 00:45:04.916630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.847 [2024-05-15 00:45:04.926447] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.847 [2024-05-15 00:45:04.926481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.847 [2024-05-15 00:45:04.926492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.847 [2024-05-15 00:45:04.935770] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.847 [2024-05-15 00:45:04.935799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.847 [2024-05-15 00:45:04.935809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.847 [2024-05-15 00:45:04.945484] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.847 [2024-05-15 00:45:04.945514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:8146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.847 [2024-05-15 00:45:04.945524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.847 [2024-05-15 00:45:04.955728] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.847 [2024-05-15 00:45:04.955761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.847 [2024-05-15 00:45:04.955775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.847 [2024-05-15 00:45:04.964487] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.847 [2024-05-15 00:45:04.964518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.847 [2024-05-15 00:45:04.964528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.847 [2024-05-15 00:45:04.973891] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.847 [2024-05-15 00:45:04.973920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.847 [2024-05-15 00:45:04.973931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.847 [2024-05-15 00:45:04.984236] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.847 [2024-05-15 00:45:04.984267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:24276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.847 [2024-05-15 00:45:04.984278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.847 [2024-05-15 00:45:04.992633] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.847 [2024-05-15 00:45:04.992662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.847 [2024-05-15 
00:45:04.992672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.847 [2024-05-15 00:45:05.003750] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:38.847 [2024-05-15 00:45:05.003778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.847 [2024-05-15 00:45:05.003788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.106 [2024-05-15 00:45:05.016319] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.106 [2024-05-15 00:45:05.016350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:12227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.106 [2024-05-15 00:45:05.016360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.106 [2024-05-15 00:45:05.029323] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.106 [2024-05-15 00:45:05.029352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.106 [2024-05-15 00:45:05.029367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.106 [2024-05-15 00:45:05.040316] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.106 [2024-05-15 00:45:05.040349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.106 [2024-05-15 00:45:05.040360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.106 [2024-05-15 00:45:05.050482] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.106 [2024-05-15 00:45:05.050512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.106 [2024-05-15 00:45:05.050523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.106 [2024-05-15 00:45:05.059518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.106 [2024-05-15 00:45:05.059545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.106 [2024-05-15 00:45:05.059559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.106 [2024-05-15 00:45:05.071066] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.106 [2024-05-15 00:45:05.071095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:16211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.106 [2024-05-15 00:45:05.071104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.106 [2024-05-15 00:45:05.082734] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.106 [2024-05-15 00:45:05.082764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2485 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.106 [2024-05-15 00:45:05.082775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.106 [2024-05-15 00:45:05.093197] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.106 [2024-05-15 00:45:05.093227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.106 [2024-05-15 00:45:05.093237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.106 [2024-05-15 00:45:05.101926] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.106 [2024-05-15 00:45:05.101954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.106 [2024-05-15 00:45:05.101965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.106 [2024-05-15 00:45:05.113255] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.106 [2024-05-15 00:45:05.113292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.106 [2024-05-15 00:45:05.113307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.106 [2024-05-15 00:45:05.125715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.106 [2024-05-15 00:45:05.125748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:7040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.106 [2024-05-15 00:45:05.125759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.106 [2024-05-15 00:45:05.134597] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.106 [2024-05-15 00:45:05.134627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.106 [2024-05-15 00:45:05.134639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.106 [2024-05-15 00:45:05.144471] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.106 [2024-05-15 
00:45:05.144500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1601 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.106 [2024-05-15 00:45:05.144511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.106 [2024-05-15 00:45:05.155760] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.106 [2024-05-15 00:45:05.155788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.106 [2024-05-15 00:45:05.155799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.106 [2024-05-15 00:45:05.165148] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.106 [2024-05-15 00:45:05.165176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.106 [2024-05-15 00:45:05.165186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.106 [2024-05-15 00:45:05.176504] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.106 [2024-05-15 00:45:05.176533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:3488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.106 [2024-05-15 00:45:05.176543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.106 [2024-05-15 00:45:05.184820] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.106 [2024-05-15 00:45:05.184847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:17270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.106 [2024-05-15 00:45:05.184857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.106 [2024-05-15 00:45:05.196291] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.106 [2024-05-15 00:45:05.196321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.106 [2024-05-15 00:45:05.196332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.106 [2024-05-15 00:45:05.204448] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.106 [2024-05-15 00:45:05.204477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.106 [2024-05-15 00:45:05.204492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.106 [2024-05-15 00:45:05.216329] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.106 [2024-05-15 00:45:05.216357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.106 [2024-05-15 00:45:05.216368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.106 [2024-05-15 00:45:05.227439] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.106 [2024-05-15 00:45:05.227470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.106 [2024-05-15 00:45:05.227482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.106 [2024-05-15 00:45:05.236009] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.106 [2024-05-15 00:45:05.236038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:9606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.106 [2024-05-15 00:45:05.236048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.106 [2024-05-15 00:45:05.247149] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.106 [2024-05-15 00:45:05.247177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.107 [2024-05-15 00:45:05.247188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.107 [2024-05-15 00:45:05.257367] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.107 [2024-05-15 00:45:05.257397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.107 [2024-05-15 00:45:05.257408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.107 [2024-05-15 00:45:05.266717] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.107 [2024-05-15 00:45:05.266750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.107 [2024-05-15 00:45:05.266762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.364 [2024-05-15 00:45:05.279384] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.364 [2024-05-15 00:45:05.279411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:19808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.364 [2024-05-15 00:45:05.279421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.364 [2024-05-15 00:45:05.290235] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.364 [2024-05-15 00:45:05.290262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.364 [2024-05-15 00:45:05.290273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.364 [2024-05-15 00:45:05.298195] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.364 [2024-05-15 00:45:05.298229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:8322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.364 [2024-05-15 00:45:05.298241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.364 [2024-05-15 00:45:05.310705] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.364 [2024-05-15 00:45:05.310735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:10775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.364 [2024-05-15 00:45:05.310746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.364 [2024-05-15 00:45:05.321719] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.365 [2024-05-15 00:45:05.321747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21621 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.365 [2024-05-15 00:45:05.321759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.365 [2024-05-15 00:45:05.330561] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.365 [2024-05-15 00:45:05.330590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.365 [2024-05-15 00:45:05.330601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.365 [2024-05-15 00:45:05.341028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.365 [2024-05-15 00:45:05.341058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:8530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.365 [2024-05-15 00:45:05.341076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.365 [2024-05-15 00:45:05.353318] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.365 [2024-05-15 00:45:05.353345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.365 [2024-05-15 00:45:05.353356] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.365 [2024-05-15 00:45:05.363263] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.365 [2024-05-15 00:45:05.363292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.365 [2024-05-15 00:45:05.363306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.365 [2024-05-15 00:45:05.371795] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.365 [2024-05-15 00:45:05.371822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:11819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.365 [2024-05-15 00:45:05.371832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.365 [2024-05-15 00:45:05.381802] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.365 [2024-05-15 00:45:05.381834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.365 [2024-05-15 00:45:05.381851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.365 [2024-05-15 00:45:05.392013] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.365 [2024-05-15 00:45:05.392045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.365 [2024-05-15 00:45:05.392058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.365 [2024-05-15 00:45:05.402836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.365 [2024-05-15 00:45:05.402865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:10858 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.365 [2024-05-15 00:45:05.402876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.365 [2024-05-15 00:45:05.411694] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.365 [2024-05-15 00:45:05.411721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.365 [2024-05-15 00:45:05.411732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.365 [2024-05-15 00:45:05.420625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.365 [2024-05-15 00:45:05.420653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6888 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.365 [2024-05-15 00:45:05.420673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.365 [2024-05-15 00:45:05.429829] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.365 [2024-05-15 00:45:05.429856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.365 [2024-05-15 00:45:05.429866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.365 [2024-05-15 00:45:05.440481] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.365 [2024-05-15 00:45:05.440508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.365 [2024-05-15 00:45:05.440518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.365 [2024-05-15 00:45:05.451621] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.365 [2024-05-15 00:45:05.451651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.365 [2024-05-15 00:45:05.451663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.365 [2024-05-15 00:45:05.460599] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.365 [2024-05-15 00:45:05.460626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.365 [2024-05-15 00:45:05.460636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.365 [2024-05-15 00:45:05.472229] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.365 [2024-05-15 00:45:05.472261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.365 [2024-05-15 00:45:05.472272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.365 [2024-05-15 00:45:05.483750] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.365 [2024-05-15 00:45:05.483777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.365 [2024-05-15 00:45:05.483787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.365 [2024-05-15 00:45:05.495118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.365 [2024-05-15 00:45:05.495152] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.365 [2024-05-15 00:45:05.495164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.365 [2024-05-15 00:45:05.504057] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.365 [2024-05-15 00:45:05.504086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.365 [2024-05-15 00:45:05.504097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.365 [2024-05-15 00:45:05.515438] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.365 [2024-05-15 00:45:05.515467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:15488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.365 [2024-05-15 00:45:05.515477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.365 [2024-05-15 00:45:05.525739] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.365 [2024-05-15 00:45:05.525769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.365 [2024-05-15 00:45:05.525779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.622 [2024-05-15 00:45:05.534350] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.622 [2024-05-15 00:45:05.534379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.622 [2024-05-15 00:45:05.534391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.622 [2024-05-15 00:45:05.545088] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.622 [2024-05-15 00:45:05.545118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.622 [2024-05-15 00:45:05.545128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.622 [2024-05-15 00:45:05.554678] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.622 [2024-05-15 00:45:05.554705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.622 [2024-05-15 00:45:05.554717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.622 [2024-05-15 00:45:05.567201] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x6150003a1400) 00:29:39.622 [2024-05-15 00:45:05.567228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.622 [2024-05-15 00:45:05.567239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.622 [2024-05-15 00:45:05.577449] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.622 [2024-05-15 00:45:05.577476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.622 [2024-05-15 00:45:05.577486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.622 [2024-05-15 00:45:05.585572] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.622 [2024-05-15 00:45:05.585600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.622 [2024-05-15 00:45:05.585611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.622 [2024-05-15 00:45:05.596402] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.622 [2024-05-15 00:45:05.596430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.622 [2024-05-15 00:45:05.596441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.622 [2024-05-15 00:45:05.606854] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.622 [2024-05-15 00:45:05.606881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:23689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.622 [2024-05-15 00:45:05.606891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.622 [2024-05-15 00:45:05.615813] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.622 [2024-05-15 00:45:05.615843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.622 [2024-05-15 00:45:05.615855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.622 [2024-05-15 00:45:05.626827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.622 [2024-05-15 00:45:05.626855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:15625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.622 [2024-05-15 00:45:05.626865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.622 [2024-05-15 
00:45:05.637015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.622 [2024-05-15 00:45:05.637040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.622 [2024-05-15 00:45:05.637050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.622 [2024-05-15 00:45:05.648061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.622 [2024-05-15 00:45:05.648093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.622 [2024-05-15 00:45:05.648104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.622 [2024-05-15 00:45:05.656500] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.622 [2024-05-15 00:45:05.656527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.622 [2024-05-15 00:45:05.656539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.622 [2024-05-15 00:45:05.665713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.622 [2024-05-15 00:45:05.665743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.622 [2024-05-15 00:45:05.665754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.622 [2024-05-15 00:45:05.675759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.622 [2024-05-15 00:45:05.675785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.622 [2024-05-15 00:45:05.675795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.622 [2024-05-15 00:45:05.685206] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.623 [2024-05-15 00:45:05.685236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.623 [2024-05-15 00:45:05.685248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.623 [2024-05-15 00:45:05.693688] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.623 [2024-05-15 00:45:05.693717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.623 [2024-05-15 00:45:05.693728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.623 [2024-05-15 00:45:05.704053] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.623 [2024-05-15 00:45:05.704081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.623 [2024-05-15 00:45:05.704092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.623 [2024-05-15 00:45:05.714357] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.623 [2024-05-15 00:45:05.714385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:23709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.623 [2024-05-15 00:45:05.714396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.623 [2024-05-15 00:45:05.725702] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.623 [2024-05-15 00:45:05.725731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.623 [2024-05-15 00:45:05.725742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.623 [2024-05-15 00:45:05.736184] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.623 [2024-05-15 00:45:05.736213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.623 [2024-05-15 00:45:05.736224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.623 [2024-05-15 00:45:05.744988] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.623 [2024-05-15 00:45:05.745018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.623 [2024-05-15 00:45:05.745029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.623 [2024-05-15 00:45:05.755791] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.623 [2024-05-15 00:45:05.755819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.623 [2024-05-15 00:45:05.755830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.623 [2024-05-15 00:45:05.768233] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.623 [2024-05-15 00:45:05.768263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.623 [2024-05-15 
00:45:05.768274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.623 [2024-05-15 00:45:05.781017] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.623 [2024-05-15 00:45:05.781052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:22892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.623 [2024-05-15 00:45:05.781063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.880 [2024-05-15 00:45:05.792517] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.880 [2024-05-15 00:45:05.792548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.880 [2024-05-15 00:45:05.792564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.880 [2024-05-15 00:45:05.802425] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.880 [2024-05-15 00:45:05.802455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.880 [2024-05-15 00:45:05.802466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.880 [2024-05-15 00:45:05.810887] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.880 [2024-05-15 00:45:05.810914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.880 [2024-05-15 00:45:05.810924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.880 [2024-05-15 00:45:05.820459] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.880 [2024-05-15 00:45:05.820493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:8355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.880 [2024-05-15 00:45:05.820504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.880 [2024-05-15 00:45:05.829842] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.880 [2024-05-15 00:45:05.829871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16523 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.880 [2024-05-15 00:45:05.829881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.880 [2024-05-15 00:45:05.839182] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.880 [2024-05-15 00:45:05.839210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 
nsid:1 lba:13708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.880 [2024-05-15 00:45:05.839221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.880 [2024-05-15 00:45:05.848343] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.880 [2024-05-15 00:45:05.848372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:23980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.880 [2024-05-15 00:45:05.848384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.880 [2024-05-15 00:45:05.858098] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.880 [2024-05-15 00:45:05.858127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.880 [2024-05-15 00:45:05.858138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.880 [2024-05-15 00:45:05.866819] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.880 [2024-05-15 00:45:05.866846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.880 [2024-05-15 00:45:05.866857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.880 [2024-05-15 00:45:05.878088] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.880 [2024-05-15 00:45:05.878116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.880 [2024-05-15 00:45:05.878126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.880 [2024-05-15 00:45:05.886200] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.880 [2024-05-15 00:45:05.886226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.880 [2024-05-15 00:45:05.886236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.880 [2024-05-15 00:45:05.897745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.880 [2024-05-15 00:45:05.897771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.880 [2024-05-15 00:45:05.897781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.880 [2024-05-15 00:45:05.909824] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.880 [2024-05-15 
00:45:05.909851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.880 [2024-05-15 00:45:05.909862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.880 [2024-05-15 00:45:05.918494] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.880 [2024-05-15 00:45:05.918523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.880 [2024-05-15 00:45:05.918536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.880 [2024-05-15 00:45:05.930698] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.880 [2024-05-15 00:45:05.930727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.880 [2024-05-15 00:45:05.930738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.880 [2024-05-15 00:45:05.941174] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.880 [2024-05-15 00:45:05.941200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.880 [2024-05-15 00:45:05.941211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.880 [2024-05-15 00:45:05.949734] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.880 [2024-05-15 00:45:05.949762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:25 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.880 [2024-05-15 00:45:05.949772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.880 [2024-05-15 00:45:05.959700] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.880 [2024-05-15 00:45:05.959727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.880 [2024-05-15 00:45:05.959738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.880 [2024-05-15 00:45:05.969057] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.880 [2024-05-15 00:45:05.969084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.880 [2024-05-15 00:45:05.969095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.880 [2024-05-15 00:45:05.977473] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.880 [2024-05-15 00:45:05.977499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.880 [2024-05-15 00:45:05.977510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.880 [2024-05-15 00:45:05.987072] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.880 [2024-05-15 00:45:05.987105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:14824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.880 [2024-05-15 00:45:05.987116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.880 [2024-05-15 00:45:05.996293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.880 [2024-05-15 00:45:05.996322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:9795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.881 [2024-05-15 00:45:05.996333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.881 [2024-05-15 00:45:06.006054] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.881 [2024-05-15 00:45:06.006081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:19176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.881 [2024-05-15 00:45:06.006092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.881 [2024-05-15 00:45:06.015156] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.881 [2024-05-15 00:45:06.015185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.881 [2024-05-15 00:45:06.015197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.881 [2024-05-15 00:45:06.024567] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.881 [2024-05-15 00:45:06.024597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.881 [2024-05-15 00:45:06.024609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.881 [2024-05-15 00:45:06.033792] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:39.881 [2024-05-15 00:45:06.033821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.881 [2024-05-15 00:45:06.033832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.137 [2024-05-15 00:45:06.042894] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:40.137 [2024-05-15 00:45:06.042922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.137 [2024-05-15 00:45:06.042933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.137 [2024-05-15 00:45:06.052678] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:40.137 [2024-05-15 00:45:06.052707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.137 [2024-05-15 00:45:06.052718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.137 [2024-05-15 00:45:06.063515] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:40.137 [2024-05-15 00:45:06.063544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.137 [2024-05-15 00:45:06.063559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.137 [2024-05-15 00:45:06.072220] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:40.137 [2024-05-15 00:45:06.072246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.138 [2024-05-15 00:45:06.072258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.138 [2024-05-15 00:45:06.087308] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:40.138 [2024-05-15 00:45:06.087338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.138 [2024-05-15 00:45:06.087348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.138 [2024-05-15 00:45:06.099635] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:40.138 [2024-05-15 00:45:06.099666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:9582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.138 [2024-05-15 00:45:06.099683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.138 [2024-05-15 00:45:06.111329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:40.138 [2024-05-15 00:45:06.111360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:10438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.138 [2024-05-15 00:45:06.111370] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.138 [2024-05-15 00:45:06.119853] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:40.138 [2024-05-15 00:45:06.119881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.138 [2024-05-15 00:45:06.119892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.138 [2024-05-15 00:45:06.130409] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:40.138 [2024-05-15 00:45:06.130436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.138 [2024-05-15 00:45:06.130446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.138 [2024-05-15 00:45:06.141663] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:40.138 [2024-05-15 00:45:06.141695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.138 [2024-05-15 00:45:06.141707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.138 [2024-05-15 00:45:06.152625] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:40.138 [2024-05-15 00:45:06.152655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.138 [2024-05-15 00:45:06.152665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.138 [2024-05-15 00:45:06.161198] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:40.138 [2024-05-15 00:45:06.161229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:16795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.138 [2024-05-15 00:45:06.161240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.138 [2024-05-15 00:45:06.172341] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:40.138 [2024-05-15 00:45:06.172368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.138 [2024-05-15 00:45:06.172378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.138 [2024-05-15 00:45:06.181640] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:40.138 [2024-05-15 00:45:06.181672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18901 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.138 [2024-05-15 00:45:06.181684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.138 [2024-05-15 00:45:06.192749] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:40.138 [2024-05-15 00:45:06.192777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.138 [2024-05-15 00:45:06.192791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.138 [2024-05-15 00:45:06.202463] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:40.138 [2024-05-15 00:45:06.202490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.138 [2024-05-15 00:45:06.202501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.138 [2024-05-15 00:45:06.212943] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:40.138 [2024-05-15 00:45:06.212970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.138 [2024-05-15 00:45:06.212980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.138 [2024-05-15 00:45:06.222264] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:40.138 [2024-05-15 00:45:06.222292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.138 [2024-05-15 00:45:06.222302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.138 [2024-05-15 00:45:06.231727] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:40.138 [2024-05-15 00:45:06.231754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.138 [2024-05-15 00:45:06.231764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.138 [2024-05-15 00:45:06.241164] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:40.138 [2024-05-15 00:45:06.241192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.138 [2024-05-15 00:45:06.241203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.138 [2024-05-15 00:45:06.252330] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:40.138 [2024-05-15 00:45:06.252362] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.138 [2024-05-15 00:45:06.252375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:40.138 [2024-05-15 00:45:06.261908] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400)
00:29:40.138 [2024-05-15 00:45:06.261936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:9538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.138 [2024-05-15 00:45:06.261947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:40.138 [2024-05-15 00:45:06.271158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400)
00:29:40.138 [2024-05-15 00:45:06.271186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.138 [2024-05-15 00:45:06.271197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:40.138 [2024-05-15 00:45:06.280986] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400)
00:29:40.138 [2024-05-15 00:45:06.281014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.138 [2024-05-15 00:45:06.281026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:40.138 [2024-05-15 00:45:06.290370] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400)
00:29:40.138 [2024-05-15 00:45:06.290396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17459 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.138 [2024-05-15 00:45:06.290407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:40.138
00:29:40.138 Latency(us)
00:29:40.138 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:40.138 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:40.138 nvme0n1 : 2.00 24847.80 97.06 0.00 0.00 5145.89 2621.44 16832.40
00:29:40.138 ===================================================================================================================
00:29:40.138 Total : 24847.80 97.06 0.00 0.00 5145.89 2621.44 16832.40
00:29:40.138 0
00:29:40.397 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:40.397 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:40.397 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:40.397 | .driver_specific
00:29:40.397 | .nvme_error
00:29:40.397 | .status_code
00:29:40.397 | .command_transient_transport_error'
00:29:40.397 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock
bdev_get_iostat -b nvme0n1
00:29:40.397 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 195 > 0 ))
00:29:40.397 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2182278
00:29:40.397 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 2182278 ']'
00:29:40.397 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 2182278
00:29:40.397 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname
00:29:40.397 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:29:40.397 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2182278
00:29:40.397 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1
00:29:40.397 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']'
00:29:40.397 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2182278'
00:29:40.397 killing process with pid 2182278
00:29:40.397 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 2182278
00:29:40.397 Received shutdown signal, test time was about 2.000000 seconds
00:29:40.397
00:29:40.397 Latency(us)
00:29:40.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:40.397 ===================================================================================================================
00:29:40.397 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:40.397 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 2182278
00:29:40.966 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:40.966 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:40.966 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:40.966 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:40.966 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:40.966 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2183573
00:29:40.966 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:40.966 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2183573 /var/tmp/bperf.sock
00:29:40.966 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 2183573 ']'
00:29:40.966 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:40.966 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100
00:29:40.966 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
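The get_transient_errcount helper traced above reduces to a single iostat RPC piped through jq. A minimal standalone sketch of the same check, assuming the bperf RPC socket at /var/tmp/bperf.sock, a bdev named nvme0n1, and that bdev_nvme_set_options --nvme-error-stat was applied beforehand (the comparison against zero is illustrative; this particular run counted 195 such errors):

  # Read per-bdev NVMe error counters from bdevperf and pull out how many
  # completions finished as COMMAND TRANSIENT TRANSPORT ERROR (00/22).
  errcount=$(/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # A non-zero count means the injected CRC32C corruption really did surface
  # as data digest errors on the TCP transport.
  (( errcount > 0 ))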
00:29:40.966 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable
00:29:40.966 00:45:06 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:40.966 [2024-05-15 00:45:06.950187] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization...
00:29:40.966 [2024-05-15 00:45:06.950335] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2183573 ]
00:29:40.966 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:40.966 Zero copy mechanism will not be used.
00:29:40.966 EAL: No free 2048 kB hugepages reported on node 1
00:29:40.966 [2024-05-15 00:45:07.074997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:41.225 [2024-05-15 00:45:07.171755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:41.483 00:45:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 ))
00:29:41.483 00:45:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0
00:29:41.483 00:45:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:41.483 00:45:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:41.740 00:45:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:41.740 00:45:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:41.740 00:45:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:41.740 00:45:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:41.740 00:45:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:41.740 00:45:07 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:41.998 nvme0n1
00:29:41.998 00:45:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:41.998 00:45:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable
00:29:41.998 00:45:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:41.998 00:45:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]]
00:29:41.998 00:45:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:41.998 00:45:08 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:42.256 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:42.256 Zero copy mechanism will not be used.
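Each run_bperf_err pass follows the same setup visible in the trace above: enable per-status NVMe error accounting, attach the target over TCP with data digest enabled, arm the accel-layer CRC32C corruption, then start the queued bdevperf job. A condensed sketch of those steps under the same socket and target address used in this run; note that digest.sh issues the injection through rpc_cmd rather than the bperf socket, and that RPC socket path is not shown in this excerpt, so the plain rpc.py call below is an assumption:

  BPERF_RPC='/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
  # Count NVMe error completions per status code and retry failed I/O indefinitely.
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach the NVMe-oF/TCP controller with data digest (--ddgst) so payloads are CRC32C-checked.
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Corrupt every 32nd CRC32C operation in the accel framework (digest.sh sends this via
  # rpc_cmd; the default RPC socket is assumed here since the excerpt does not show it).
  /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  # Kick off the waiting bdevperf workload (randread, 128 KiB blocks, queue depth 16 for this run).
  /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests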
00:29:42.256 Running I/O for 2 seconds... 00:29:42.256 [2024-05-15 00:45:08.229300] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.256 [2024-05-15 00:45:08.229357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.256 [2024-05-15 00:45:08.229372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.256 [2024-05-15 00:45:08.233522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.256 [2024-05-15 00:45:08.233571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.256 [2024-05-15 00:45:08.233584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.256 [2024-05-15 00:45:08.237687] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.256 [2024-05-15 00:45:08.237721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.256 [2024-05-15 00:45:08.237734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.256 [2024-05-15 00:45:08.241849] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.256 [2024-05-15 00:45:08.241881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.256 [2024-05-15 00:45:08.241893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.256 [2024-05-15 00:45:08.246019] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.256 [2024-05-15 00:45:08.246059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.256 [2024-05-15 00:45:08.246071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.256 [2024-05-15 00:45:08.250036] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.256 [2024-05-15 00:45:08.250068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.256 [2024-05-15 00:45:08.250080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.256 [2024-05-15 00:45:08.254047] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.256 [2024-05-15 00:45:08.254078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.256 [2024-05-15 00:45:08.254089] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.256 [2024-05-15 00:45:08.258571] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.256 [2024-05-15 00:45:08.258607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.256 [2024-05-15 00:45:08.258619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.256 [2024-05-15 00:45:08.262900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.256 [2024-05-15 00:45:08.262932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.256 [2024-05-15 00:45:08.262944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.256 [2024-05-15 00:45:08.269379] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.256 [2024-05-15 00:45:08.269408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.256 [2024-05-15 00:45:08.269421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.256 [2024-05-15 00:45:08.274893] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.256 [2024-05-15 00:45:08.274923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.256 [2024-05-15 00:45:08.274934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.256 [2024-05-15 00:45:08.279639] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.256 [2024-05-15 00:45:08.279671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.256 [2024-05-15 00:45:08.279683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.256 [2024-05-15 00:45:08.283963] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.256 [2024-05-15 00:45:08.283994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.256 [2024-05-15 00:45:08.284005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.256 [2024-05-15 00:45:08.287932] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.256 [2024-05-15 00:45:08.287964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.256 [2024-05-15 00:45:08.287976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.256 [2024-05-15 00:45:08.292178] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.256 [2024-05-15 00:45:08.292209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.256 [2024-05-15 00:45:08.292220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.256 [2024-05-15 00:45:08.296743] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.256 [2024-05-15 00:45:08.296785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.256 [2024-05-15 00:45:08.296796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.256 [2024-05-15 00:45:08.302847] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.256 [2024-05-15 00:45:08.302878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.256 [2024-05-15 00:45:08.302889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.256 [2024-05-15 00:45:08.310108] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.256 [2024-05-15 00:45:08.310138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.256 [2024-05-15 00:45:08.310150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.256 [2024-05-15 00:45:08.316790] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.256 [2024-05-15 00:45:08.316825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.256 [2024-05-15 00:45:08.316837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.256 [2024-05-15 00:45:08.322679] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.256 [2024-05-15 00:45:08.322710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.256 [2024-05-15 00:45:08.322721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.256 [2024-05-15 00:45:08.327946] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.256 [2024-05-15 00:45:08.327977] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.256 [2024-05-15 00:45:08.327989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.257 [2024-05-15 00:45:08.334139] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.257 [2024-05-15 00:45:08.334173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.257 [2024-05-15 00:45:08.334185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.257 [2024-05-15 00:45:08.340774] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.257 [2024-05-15 00:45:08.340804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.257 [2024-05-15 00:45:08.340815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.257 [2024-05-15 00:45:08.347148] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.257 [2024-05-15 00:45:08.347181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.257 [2024-05-15 00:45:08.347193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.257 [2024-05-15 00:45:08.354201] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.257 [2024-05-15 00:45:08.354232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.257 [2024-05-15 00:45:08.354244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.257 [2024-05-15 00:45:08.361416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.257 [2024-05-15 00:45:08.361449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.257 [2024-05-15 00:45:08.361460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.257 [2024-05-15 00:45:08.368324] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.257 [2024-05-15 00:45:08.368353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.257 [2024-05-15 00:45:08.368364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.257 [2024-05-15 00:45:08.374298] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x6150003a1400) 00:29:42.257 [2024-05-15 00:45:08.374333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.257 [2024-05-15 00:45:08.374345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.257 [2024-05-15 00:45:08.379148] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.257 [2024-05-15 00:45:08.379182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.257 [2024-05-15 00:45:08.379193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.257 [2024-05-15 00:45:08.383826] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.257 [2024-05-15 00:45:08.383861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.257 [2024-05-15 00:45:08.383874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.257 [2024-05-15 00:45:08.388802] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.257 [2024-05-15 00:45:08.388835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.257 [2024-05-15 00:45:08.388846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.257 [2024-05-15 00:45:08.393248] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.257 [2024-05-15 00:45:08.393278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.257 [2024-05-15 00:45:08.393289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.257 [2024-05-15 00:45:08.397334] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.257 [2024-05-15 00:45:08.397367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.257 [2024-05-15 00:45:08.397379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.257 [2024-05-15 00:45:08.401290] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.257 [2024-05-15 00:45:08.401322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.257 [2024-05-15 00:45:08.401333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.257 [2024-05-15 
00:45:08.405228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.257 [2024-05-15 00:45:08.405259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.257 [2024-05-15 00:45:08.405271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.257 [2024-05-15 00:45:08.409129] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.257 [2024-05-15 00:45:08.409161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.257 [2024-05-15 00:45:08.409173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.257 [2024-05-15 00:45:08.413159] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.257 [2024-05-15 00:45:08.413192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.257 [2024-05-15 00:45:08.413204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.257 [2024-05-15 00:45:08.417203] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.257 [2024-05-15 00:45:08.417234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.257 [2024-05-15 00:45:08.417246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.517 [2024-05-15 00:45:08.421217] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.517 [2024-05-15 00:45:08.421253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.517 [2024-05-15 00:45:08.421265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.517 [2024-05-15 00:45:08.425323] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.517 [2024-05-15 00:45:08.425358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.517 [2024-05-15 00:45:08.425371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.517 [2024-05-15 00:45:08.429400] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.517 [2024-05-15 00:45:08.429432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.517 [2024-05-15 00:45:08.429444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.517 [2024-05-15 00:45:08.433511] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.517 [2024-05-15 00:45:08.433541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.517 [2024-05-15 00:45:08.433558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.518 [2024-05-15 00:45:08.437902] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.518 [2024-05-15 00:45:08.437938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.518 [2024-05-15 00:45:08.437949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.518 [2024-05-15 00:45:08.443715] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.518 [2024-05-15 00:45:08.443744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.518 [2024-05-15 00:45:08.443755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.518 [2024-05-15 00:45:08.449904] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.518 [2024-05-15 00:45:08.449936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.518 [2024-05-15 00:45:08.449949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.518 [2024-05-15 00:45:08.454700] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.518 [2024-05-15 00:45:08.454731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.518 [2024-05-15 00:45:08.454742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.518 [2024-05-15 00:45:08.459510] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.518 [2024-05-15 00:45:08.459540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.518 [2024-05-15 00:45:08.459557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.518 [2024-05-15 00:45:08.464261] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.518 [2024-05-15 00:45:08.464294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.518 [2024-05-15 00:45:08.464306] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.518 [2024-05-15 00:45:08.469018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.518 [2024-05-15 00:45:08.469051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.518 [2024-05-15 00:45:08.469062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.518 [2024-05-15 00:45:08.473396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.518 [2024-05-15 00:45:08.473429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.518 [2024-05-15 00:45:08.473442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.518 [2024-05-15 00:45:08.478741] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.518 [2024-05-15 00:45:08.478772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.518 [2024-05-15 00:45:08.478784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.518 [2024-05-15 00:45:08.485507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.518 [2024-05-15 00:45:08.485536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.518 [2024-05-15 00:45:08.485548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.518 [2024-05-15 00:45:08.490417] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.518 [2024-05-15 00:45:08.490454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.518 [2024-05-15 00:45:08.490472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.518 [2024-05-15 00:45:08.494742] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.518 [2024-05-15 00:45:08.494775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.518 [2024-05-15 00:45:08.494786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.518 [2024-05-15 00:45:08.499250] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.518 [2024-05-15 00:45:08.499285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:42.518 [2024-05-15 00:45:08.499298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.518 [2024-05-15 00:45:08.505312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.518 [2024-05-15 00:45:08.505343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.518 [2024-05-15 00:45:08.505358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.518 [2024-05-15 00:45:08.511181] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.518 [2024-05-15 00:45:08.511214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.518 [2024-05-15 00:45:08.511225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.518 [2024-05-15 00:45:08.515909] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.518 [2024-05-15 00:45:08.515939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.518 [2024-05-15 00:45:08.515950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.518 [2024-05-15 00:45:08.520781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.518 [2024-05-15 00:45:08.520812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.518 [2024-05-15 00:45:08.520823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.518 [2024-05-15 00:45:08.525329] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.518 [2024-05-15 00:45:08.525360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.518 [2024-05-15 00:45:08.525371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.518 [2024-05-15 00:45:08.529907] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.518 [2024-05-15 00:45:08.529937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.518 [2024-05-15 00:45:08.529948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.518 [2024-05-15 00:45:08.535839] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.518 [2024-05-15 00:45:08.535873] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.518 [2024-05-15 00:45:08.535884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.518 [2024-05-15 00:45:08.541726] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.518 [2024-05-15 00:45:08.541757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.518 [2024-05-15 00:45:08.541768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.518 [2024-05-15 00:45:08.546466] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.518 [2024-05-15 00:45:08.546494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.518 [2024-05-15 00:45:08.546504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.518 [2024-05-15 00:45:08.551571] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.518 [2024-05-15 00:45:08.551602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.518 [2024-05-15 00:45:08.551612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.518 [2024-05-15 00:45:08.556378] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.518 [2024-05-15 00:45:08.556408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.518 [2024-05-15 00:45:08.556418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.518 [2024-05-15 00:45:08.562609] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.518 [2024-05-15 00:45:08.562637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.518 [2024-05-15 00:45:08.562646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.518 [2024-05-15 00:45:08.569481] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.518 [2024-05-15 00:45:08.569512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.518 [2024-05-15 00:45:08.569523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.518 [2024-05-15 00:45:08.575573] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150003a1400) 00:29:42.518 [2024-05-15 00:45:08.575603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.518 [2024-05-15 00:45:08.575614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.518 [2024-05-15 00:45:08.579726] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.519 [2024-05-15 00:45:08.579754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.519 [2024-05-15 00:45:08.579765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.519 [2024-05-15 00:45:08.582350] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.519 [2024-05-15 00:45:08.582382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.519 [2024-05-15 00:45:08.582393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.519 [2024-05-15 00:45:08.587043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.519 [2024-05-15 00:45:08.587073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.519 [2024-05-15 00:45:08.587084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.519 [2024-05-15 00:45:08.591545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.519 [2024-05-15 00:45:08.591588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.519 [2024-05-15 00:45:08.591602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.519 [2024-05-15 00:45:08.596279] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.519 [2024-05-15 00:45:08.596310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.519 [2024-05-15 00:45:08.596320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.519 [2024-05-15 00:45:08.601609] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.519 [2024-05-15 00:45:08.601639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.519 [2024-05-15 00:45:08.601650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.519 [2024-05-15 00:45:08.607838] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.519 [2024-05-15 00:45:08.607872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.519 [2024-05-15 00:45:08.607884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.519 [2024-05-15 00:45:08.612297] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.519 [2024-05-15 00:45:08.612328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.519 [2024-05-15 00:45:08.612339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.519 [2024-05-15 00:45:08.616967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.519 [2024-05-15 00:45:08.616997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.519 [2024-05-15 00:45:08.617007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.519 [2024-05-15 00:45:08.621846] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.519 [2024-05-15 00:45:08.621879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.519 [2024-05-15 00:45:08.621890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.519 [2024-05-15 00:45:08.625807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.519 [2024-05-15 00:45:08.625838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.519 [2024-05-15 00:45:08.625849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.519 [2024-05-15 00:45:08.629808] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.519 [2024-05-15 00:45:08.629839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.519 [2024-05-15 00:45:08.629849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.519 [2024-05-15 00:45:08.633809] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.519 [2024-05-15 00:45:08.633839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.519 [2024-05-15 00:45:08.633850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.519 [2024-05-15 00:45:08.637880] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.519 [2024-05-15 00:45:08.637910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.519 [2024-05-15 00:45:08.637920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.519 [2024-05-15 00:45:08.642565] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.519 [2024-05-15 00:45:08.642594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.519 [2024-05-15 00:45:08.642605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.519 [2024-05-15 00:45:08.647013] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.519 [2024-05-15 00:45:08.647045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.519 [2024-05-15 00:45:08.647056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.519 [2024-05-15 00:45:08.651693] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.519 [2024-05-15 00:45:08.651723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.519 [2024-05-15 00:45:08.651734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.519 [2024-05-15 00:45:08.655747] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.519 [2024-05-15 00:45:08.655777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.519 [2024-05-15 00:45:08.655788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.519 [2024-05-15 00:45:08.659843] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.519 [2024-05-15 00:45:08.659877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.519 [2024-05-15 00:45:08.659888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.519 [2024-05-15 00:45:08.664040] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.519 [2024-05-15 00:45:08.664069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.519 [2024-05-15 00:45:08.664080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.519 [2024-05-15 00:45:08.668628] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.519 [2024-05-15 00:45:08.668663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.519 [2024-05-15 00:45:08.668678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.519 [2024-05-15 00:45:08.673962] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.519 [2024-05-15 00:45:08.673995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.519 [2024-05-15 00:45:08.674007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.781 [2024-05-15 00:45:08.679676] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.781 [2024-05-15 00:45:08.679707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.781 [2024-05-15 00:45:08.679718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.781 [2024-05-15 00:45:08.685942] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.781 [2024-05-15 00:45:08.685973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.781 [2024-05-15 00:45:08.685984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.781 [2024-05-15 00:45:08.691924] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.781 [2024-05-15 00:45:08.691957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.781 [2024-05-15 00:45:08.691967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.781 [2024-05-15 00:45:08.697158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.781 [2024-05-15 00:45:08.697194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.781 [2024-05-15 00:45:08.697206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.781 [2024-05-15 00:45:08.701503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.781 [2024-05-15 00:45:08.701536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23552 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.781 [2024-05-15 00:45:08.701547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.781 [2024-05-15 00:45:08.706080] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.781 [2024-05-15 00:45:08.706114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.781 [2024-05-15 00:45:08.706125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.781 [2024-05-15 00:45:08.711444] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.781 [2024-05-15 00:45:08.711476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.781 [2024-05-15 00:45:08.711486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.781 [2024-05-15 00:45:08.715656] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.781 [2024-05-15 00:45:08.715695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.781 [2024-05-15 00:45:08.715706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.781 [2024-05-15 00:45:08.720541] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.781 [2024-05-15 00:45:08.720578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.781 [2024-05-15 00:45:08.720588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.781 [2024-05-15 00:45:08.724928] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.781 [2024-05-15 00:45:08.724962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.781 [2024-05-15 00:45:08.724981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.781 [2024-05-15 00:45:08.729115] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.781 [2024-05-15 00:45:08.729148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.781 [2024-05-15 00:45:08.729159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.781 [2024-05-15 00:45:08.733295] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.781 [2024-05-15 00:45:08.733326] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.781 [2024-05-15 00:45:08.733341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.781 [2024-05-15 00:45:08.737524] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.781 [2024-05-15 00:45:08.737573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.781 [2024-05-15 00:45:08.737585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.781 [2024-05-15 00:45:08.741463] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.781 [2024-05-15 00:45:08.741494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.781 [2024-05-15 00:45:08.741505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.781 [2024-05-15 00:45:08.745369] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.781 [2024-05-15 00:45:08.745400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.781 [2024-05-15 00:45:08.745411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.781 [2024-05-15 00:45:08.749163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.782 [2024-05-15 00:45:08.749195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.782 [2024-05-15 00:45:08.749211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.782 [2024-05-15 00:45:08.753083] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.782 [2024-05-15 00:45:08.753114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.782 [2024-05-15 00:45:08.753125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.782 [2024-05-15 00:45:08.756905] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.782 [2024-05-15 00:45:08.756937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.782 [2024-05-15 00:45:08.756947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.782 [2024-05-15 00:45:08.760878] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150003a1400) 00:29:42.782 [2024-05-15 00:45:08.760908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.782 [2024-05-15 00:45:08.760919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.782 [2024-05-15 00:45:08.764836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.782 [2024-05-15 00:45:08.764870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.782 [2024-05-15 00:45:08.764882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.782 [2024-05-15 00:45:08.769110] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.782 [2024-05-15 00:45:08.769140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.782 [2024-05-15 00:45:08.769151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.782 [2024-05-15 00:45:08.773042] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.782 [2024-05-15 00:45:08.773072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.782 [2024-05-15 00:45:08.773083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.782 [2024-05-15 00:45:08.776967] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.782 [2024-05-15 00:45:08.776996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.782 [2024-05-15 00:45:08.777007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.782 [2024-05-15 00:45:08.781032] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.782 [2024-05-15 00:45:08.781063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.782 [2024-05-15 00:45:08.781073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.782 [2024-05-15 00:45:08.785362] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.782 [2024-05-15 00:45:08.785401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.782 [2024-05-15 00:45:08.785412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.782 [2024-05-15 00:45:08.790439] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.782 [2024-05-15 00:45:08.790471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.782 [2024-05-15 00:45:08.790482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.782 [2024-05-15 00:45:08.795729] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.782 [2024-05-15 00:45:08.795763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.782 [2024-05-15 00:45:08.795773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.782 [2024-05-15 00:45:08.803806] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.782 [2024-05-15 00:45:08.803837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.782 [2024-05-15 00:45:08.803848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.782 [2024-05-15 00:45:08.811078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.782 [2024-05-15 00:45:08.811107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.782 [2024-05-15 00:45:08.811117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.782 [2024-05-15 00:45:08.818062] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.782 [2024-05-15 00:45:08.818094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.782 [2024-05-15 00:45:08.818106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.782 [2024-05-15 00:45:08.824629] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.782 [2024-05-15 00:45:08.824658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.782 [2024-05-15 00:45:08.824669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.782 [2024-05-15 00:45:08.831473] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.782 [2024-05-15 00:45:08.831502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.782 [2024-05-15 00:45:08.831512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.782 [2024-05-15 00:45:08.838192] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.782 [2024-05-15 00:45:08.838226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.782 [2024-05-15 00:45:08.838245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.782 [2024-05-15 00:45:08.846530] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.782 [2024-05-15 00:45:08.846566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.782 [2024-05-15 00:45:08.846578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.782 [2024-05-15 00:45:08.853517] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.782 [2024-05-15 00:45:08.853549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.782 [2024-05-15 00:45:08.853565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.782 [2024-05-15 00:45:08.860571] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.782 [2024-05-15 00:45:08.860603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.782 [2024-05-15 00:45:08.860614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.782 [2024-05-15 00:45:08.867466] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.782 [2024-05-15 00:45:08.867496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.782 [2024-05-15 00:45:08.867506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.782 [2024-05-15 00:45:08.874310] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.782 [2024-05-15 00:45:08.874343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.782 [2024-05-15 00:45:08.874355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.782 [2024-05-15 00:45:08.880324] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.782 [2024-05-15 00:45:08.880356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.782 [2024-05-15 00:45:08.880366] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.782 [2024-05-15 00:45:08.884777] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.782 [2024-05-15 00:45:08.884810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.782 [2024-05-15 00:45:08.884821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.782 [2024-05-15 00:45:08.889503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.782 [2024-05-15 00:45:08.889533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.782 [2024-05-15 00:45:08.889544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.782 [2024-05-15 00:45:08.894169] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.782 [2024-05-15 00:45:08.894204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.782 [2024-05-15 00:45:08.894215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.782 [2024-05-15 00:45:08.898137] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.782 [2024-05-15 00:45:08.898169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.783 [2024-05-15 00:45:08.898180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.783 [2024-05-15 00:45:08.902097] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.783 [2024-05-15 00:45:08.902128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.783 [2024-05-15 00:45:08.902139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.783 [2024-05-15 00:45:08.906344] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.783 [2024-05-15 00:45:08.906380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.783 [2024-05-15 00:45:08.906391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.783 [2024-05-15 00:45:08.911585] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.783 [2024-05-15 00:45:08.911618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:42.783 [2024-05-15 00:45:08.911630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.783 [2024-05-15 00:45:08.918410] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.783 [2024-05-15 00:45:08.918443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.783 [2024-05-15 00:45:08.918454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:42.783 [2024-05-15 00:45:08.924717] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.783 [2024-05-15 00:45:08.924749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.783 [2024-05-15 00:45:08.924760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:42.783 [2024-05-15 00:45:08.931881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.783 [2024-05-15 00:45:08.931913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.783 [2024-05-15 00:45:08.931924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:42.783 [2024-05-15 00:45:08.937056] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.783 [2024-05-15 00:45:08.937090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.783 [2024-05-15 00:45:08.937102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:42.783 [2024-05-15 00:45:08.941607] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:42.783 [2024-05-15 00:45:08.941641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.783 [2024-05-15 00:45:08.941653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.043 [2024-05-15 00:45:08.946129] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.043 [2024-05-15 00:45:08.946161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.043 [2024-05-15 00:45:08.946172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.043 [2024-05-15 00:45:08.951425] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.043 [2024-05-15 00:45:08.951463] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.043 [2024-05-15 00:45:08.951474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.043 [2024-05-15 00:45:08.956641] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.043 [2024-05-15 00:45:08.956675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.043 [2024-05-15 00:45:08.956687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.043 [2024-05-15 00:45:08.962001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.043 [2024-05-15 00:45:08.962033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.043 [2024-05-15 00:45:08.962043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.043 [2024-05-15 00:45:08.966352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.043 [2024-05-15 00:45:08.966384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.043 [2024-05-15 00:45:08.966394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.043 [2024-05-15 00:45:08.970439] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.043 [2024-05-15 00:45:08.970469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.043 [2024-05-15 00:45:08.970480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.043 [2024-05-15 00:45:08.974975] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.044 [2024-05-15 00:45:08.975008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.044 [2024-05-15 00:45:08.975020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.044 [2024-05-15 00:45:08.982027] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.044 [2024-05-15 00:45:08.982063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.044 [2024-05-15 00:45:08.982074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.044 [2024-05-15 00:45:08.989201] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150003a1400) 00:29:43.044 [2024-05-15 00:45:08.989233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.044 [2024-05-15 00:45:08.989244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.044 [2024-05-15 00:45:08.997086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.044 [2024-05-15 00:45:08.997117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.044 [2024-05-15 00:45:08.997127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.044 [2024-05-15 00:45:09.005278] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.044 [2024-05-15 00:45:09.005307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.044 [2024-05-15 00:45:09.005317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.044 [2024-05-15 00:45:09.012571] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.044 [2024-05-15 00:45:09.012606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.044 [2024-05-15 00:45:09.012617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.044 [2024-05-15 00:45:09.017983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.044 [2024-05-15 00:45:09.018015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.044 [2024-05-15 00:45:09.018025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.044 [2024-05-15 00:45:09.021713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.044 [2024-05-15 00:45:09.021747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.044 [2024-05-15 00:45:09.021759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.044 [2024-05-15 00:45:09.025717] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.044 [2024-05-15 00:45:09.025748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.044 [2024-05-15 00:45:09.025759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.044 [2024-05-15 00:45:09.029601] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.044 [2024-05-15 00:45:09.029633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.044 [2024-05-15 00:45:09.029644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.044 [2024-05-15 00:45:09.033598] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.044 [2024-05-15 00:45:09.033635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.044 [2024-05-15 00:45:09.033647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.044 [2024-05-15 00:45:09.037619] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.044 [2024-05-15 00:45:09.037653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.044 [2024-05-15 00:45:09.037664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.044 [2024-05-15 00:45:09.041596] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.044 [2024-05-15 00:45:09.041629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.044 [2024-05-15 00:45:09.041640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.044 [2024-05-15 00:45:09.045706] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.044 [2024-05-15 00:45:09.045738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.044 [2024-05-15 00:45:09.045750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.044 [2024-05-15 00:45:09.050328] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.044 [2024-05-15 00:45:09.050362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.044 [2024-05-15 00:45:09.050373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.044 [2024-05-15 00:45:09.057301] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.044 [2024-05-15 00:45:09.057333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.044 [2024-05-15 00:45:09.057348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.044 [2024-05-15 00:45:09.064109] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.044 [2024-05-15 00:45:09.064142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.044 [2024-05-15 00:45:09.064153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.044 [2024-05-15 00:45:09.071749] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.044 [2024-05-15 00:45:09.071780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.044 [2024-05-15 00:45:09.071792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.044 [2024-05-15 00:45:09.078618] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.044 [2024-05-15 00:45:09.078652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.044 [2024-05-15 00:45:09.078662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.044 [2024-05-15 00:45:09.085586] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.044 [2024-05-15 00:45:09.085614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.044 [2024-05-15 00:45:09.085624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.044 [2024-05-15 00:45:09.092580] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.044 [2024-05-15 00:45:09.092613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.044 [2024-05-15 00:45:09.092626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.044 [2024-05-15 00:45:09.099836] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.044 [2024-05-15 00:45:09.099868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.044 [2024-05-15 00:45:09.099879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.044 [2024-05-15 00:45:09.107179] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.044 [2024-05-15 00:45:09.107211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.044 [2024-05-15 00:45:09.107222] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.044 [2024-05-15 00:45:09.114230] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.044 [2024-05-15 00:45:09.114260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.044 [2024-05-15 00:45:09.114271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.044 [2024-05-15 00:45:09.121109] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.044 [2024-05-15 00:45:09.121141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.044 [2024-05-15 00:45:09.121153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.044 [2024-05-15 00:45:09.128114] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.044 [2024-05-15 00:45:09.128150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.044 [2024-05-15 00:45:09.128163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.044 [2024-05-15 00:45:09.135530] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.044 [2024-05-15 00:45:09.135576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.044 [2024-05-15 00:45:09.135590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.044 [2024-05-15 00:45:09.143043] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.044 [2024-05-15 00:45:09.143077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.045 [2024-05-15 00:45:09.143089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.045 [2024-05-15 00:45:09.150983] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.045 [2024-05-15 00:45:09.151016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.045 [2024-05-15 00:45:09.151027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.045 [2024-05-15 00:45:09.156484] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.045 [2024-05-15 00:45:09.156524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21888 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:43.045 [2024-05-15 00:45:09.156535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.045 [2024-05-15 00:45:09.160743] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.045 [2024-05-15 00:45:09.160776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.045 [2024-05-15 00:45:09.160787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.045 [2024-05-15 00:45:09.165118] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.045 [2024-05-15 00:45:09.165150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.045 [2024-05-15 00:45:09.165160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.045 [2024-05-15 00:45:09.170321] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.045 [2024-05-15 00:45:09.170354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.045 [2024-05-15 00:45:09.170365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.045 [2024-05-15 00:45:09.175157] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.045 [2024-05-15 00:45:09.175193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.045 [2024-05-15 00:45:09.175204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.045 [2024-05-15 00:45:09.179370] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.045 [2024-05-15 00:45:09.179407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.045 [2024-05-15 00:45:09.179419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.045 [2024-05-15 00:45:09.183648] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.045 [2024-05-15 00:45:09.183679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.045 [2024-05-15 00:45:09.183694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.045 [2024-05-15 00:45:09.185917] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.045 [2024-05-15 00:45:09.185949] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.045 [2024-05-15 00:45:09.185959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.045 [2024-05-15 00:45:09.189522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.045 [2024-05-15 00:45:09.189560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.045 [2024-05-15 00:45:09.189575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.045 [2024-05-15 00:45:09.193631] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.045 [2024-05-15 00:45:09.193664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.045 [2024-05-15 00:45:09.193675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.045 [2024-05-15 00:45:09.198106] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.045 [2024-05-15 00:45:09.198141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.045 [2024-05-15 00:45:09.198153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.045 [2024-05-15 00:45:09.203025] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.045 [2024-05-15 00:45:09.203059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.045 [2024-05-15 00:45:09.203070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.304 [2024-05-15 00:45:09.208158] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.304 [2024-05-15 00:45:09.208192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-05-15 00:45:09.208204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.304 [2024-05-15 00:45:09.212809] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.304 [2024-05-15 00:45:09.212843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-05-15 00:45:09.212855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.304 [2024-05-15 00:45:09.217835] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x6150003a1400) 00:29:43.304 [2024-05-15 00:45:09.217867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-05-15 00:45:09.217879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.304 [2024-05-15 00:45:09.223730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.304 [2024-05-15 00:45:09.223764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-05-15 00:45:09.223777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.304 [2024-05-15 00:45:09.230603] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.304 [2024-05-15 00:45:09.230634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-05-15 00:45:09.230645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.304 [2024-05-15 00:45:09.238926] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.304 [2024-05-15 00:45:09.238955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.304 [2024-05-15 00:45:09.238967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.304 [2024-05-15 00:45:09.245571] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.305 [2024-05-15 00:45:09.245601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.305 [2024-05-15 00:45:09.245612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.305 [2024-05-15 00:45:09.252365] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.305 [2024-05-15 00:45:09.252394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.305 [2024-05-15 00:45:09.252404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.305 [2024-05-15 00:45:09.259358] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.305 [2024-05-15 00:45:09.259388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.305 [2024-05-15 00:45:09.259399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.305 [2024-05-15 00:45:09.267174] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.305 [2024-05-15 00:45:09.267206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.305 [2024-05-15 00:45:09.267220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.305 [2024-05-15 00:45:09.274339] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.305 [2024-05-15 00:45:09.274375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.305 [2024-05-15 00:45:09.274387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.305 [2024-05-15 00:45:09.281390] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.305 [2024-05-15 00:45:09.281421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.305 [2024-05-15 00:45:09.281436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.305 [2024-05-15 00:45:09.288208] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.305 [2024-05-15 00:45:09.288242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.305 [2024-05-15 00:45:09.288254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.305 [2024-05-15 00:45:09.295119] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.305 [2024-05-15 00:45:09.295151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.305 [2024-05-15 00:45:09.295162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.305 [2024-05-15 00:45:09.302182] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.305 [2024-05-15 00:45:09.302211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.305 [2024-05-15 00:45:09.302222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.305 [2024-05-15 00:45:09.310048] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.305 [2024-05-15 00:45:09.310078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.305 [2024-05-15 00:45:09.310089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.305 [2024-05-15 00:45:09.314881] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.305 [2024-05-15 00:45:09.314912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.305 [2024-05-15 00:45:09.314923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.305 [2024-05-15 00:45:09.319385] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.305 [2024-05-15 00:45:09.319422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.305 [2024-05-15 00:45:09.319433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.305 [2024-05-15 00:45:09.323521] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.305 [2024-05-15 00:45:09.323571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.305 [2024-05-15 00:45:09.323584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.305 [2024-05-15 00:45:09.328123] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.305 [2024-05-15 00:45:09.328154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.305 [2024-05-15 00:45:09.328166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.305 [2024-05-15 00:45:09.334827] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.305 [2024-05-15 00:45:09.334860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.305 [2024-05-15 00:45:09.334871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.305 [2024-05-15 00:45:09.340113] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.305 [2024-05-15 00:45:09.340146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.305 [2024-05-15 00:45:09.340157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.305 [2024-05-15 00:45:09.347132] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.305 [2024-05-15 00:45:09.347166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.305 [2024-05-15 00:45:09.347177] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.305 [2024-05-15 00:45:09.354972] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.305 [2024-05-15 00:45:09.355004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.305 [2024-05-15 00:45:09.355015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.305 [2024-05-15 00:45:09.362276] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.305 [2024-05-15 00:45:09.362310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.305 [2024-05-15 00:45:09.362321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.305 [2024-05-15 00:45:09.369245] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.305 [2024-05-15 00:45:09.369276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.305 [2024-05-15 00:45:09.369287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.305 [2024-05-15 00:45:09.375927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.305 [2024-05-15 00:45:09.375957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.305 [2024-05-15 00:45:09.375968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.305 [2024-05-15 00:45:09.382854] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.305 [2024-05-15 00:45:09.382889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.305 [2024-05-15 00:45:09.382901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.305 [2024-05-15 00:45:09.391364] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.305 [2024-05-15 00:45:09.391399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.305 [2024-05-15 00:45:09.391417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.305 [2024-05-15 00:45:09.399168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.305 [2024-05-15 00:45:09.399199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19200 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:43.305 [2024-05-15 00:45:09.399210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.305 [2024-05-15 00:45:09.406196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.305 [2024-05-15 00:45:09.406226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.305 [2024-05-15 00:45:09.406236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.305 [2024-05-15 00:45:09.413011] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.305 [2024-05-15 00:45:09.413048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.305 [2024-05-15 00:45:09.413058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.305 [2024-05-15 00:45:09.419655] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.305 [2024-05-15 00:45:09.419687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.305 [2024-05-15 00:45:09.419698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.305 [2024-05-15 00:45:09.424837] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.306 [2024-05-15 00:45:09.424869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.306 [2024-05-15 00:45:09.424880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.306 [2024-05-15 00:45:09.429812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.306 [2024-05-15 00:45:09.429846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.306 [2024-05-15 00:45:09.429859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.306 [2024-05-15 00:45:09.434387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.306 [2024-05-15 00:45:09.434419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.306 [2024-05-15 00:45:09.434431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.306 [2024-05-15 00:45:09.438793] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.306 [2024-05-15 00:45:09.438824] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.306 [2024-05-15 00:45:09.438836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.306 [2024-05-15 00:45:09.443847] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.306 [2024-05-15 00:45:09.443888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.306 [2024-05-15 00:45:09.443899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.306 [2024-05-15 00:45:09.448148] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.306 [2024-05-15 00:45:09.448180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.306 [2024-05-15 00:45:09.448191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.306 [2024-05-15 00:45:09.452009] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.306 [2024-05-15 00:45:09.452044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.306 [2024-05-15 00:45:09.452065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.306 [2024-05-15 00:45:09.456198] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.306 [2024-05-15 00:45:09.456231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.306 [2024-05-15 00:45:09.456242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.306 [2024-05-15 00:45:09.460784] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.306 [2024-05-15 00:45:09.460816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.306 [2024-05-15 00:45:09.460827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.306 [2024-05-15 00:45:09.465851] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.306 [2024-05-15 00:45:09.465883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.306 [2024-05-15 00:45:09.465894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.565 [2024-05-15 00:45:09.470098] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x6150003a1400) 00:29:43.565 [2024-05-15 00:45:09.470134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.565 [2024-05-15 00:45:09.470146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.565 [2024-05-15 00:45:09.474396] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.565 [2024-05-15 00:45:09.474428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.565 [2024-05-15 00:45:09.474439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.565 [2024-05-15 00:45:09.478478] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.565 [2024-05-15 00:45:09.478509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.565 [2024-05-15 00:45:09.478524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.565 [2024-05-15 00:45:09.482306] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.565 [2024-05-15 00:45:09.482339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.565 [2024-05-15 00:45:09.482349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.565 [2024-05-15 00:45:09.486360] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.565 [2024-05-15 00:45:09.486392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.565 [2024-05-15 00:45:09.486403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.565 [2024-05-15 00:45:09.491022] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.565 [2024-05-15 00:45:09.491053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.565 [2024-05-15 00:45:09.491064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.565 [2024-05-15 00:45:09.497933] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.565 [2024-05-15 00:45:09.497963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.565 [2024-05-15 00:45:09.497973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.565 [2024-05-15 00:45:09.502423] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.565 [2024-05-15 00:45:09.502456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.565 [2024-05-15 00:45:09.502469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.565 [2024-05-15 00:45:09.507435] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.565 [2024-05-15 00:45:09.507465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.565 [2024-05-15 00:45:09.507476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.565 [2024-05-15 00:45:09.511945] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.565 [2024-05-15 00:45:09.511977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.565 [2024-05-15 00:45:09.511989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.565 [2024-05-15 00:45:09.515933] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.565 [2024-05-15 00:45:09.515965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.565 [2024-05-15 00:45:09.515976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.565 [2024-05-15 00:45:09.519950] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.565 [2024-05-15 00:45:09.519990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.565 [2024-05-15 00:45:09.520001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.565 [2024-05-15 00:45:09.524036] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.565 [2024-05-15 00:45:09.524069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.565 [2024-05-15 00:45:09.524082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.565 [2024-05-15 00:45:09.528746] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.565 [2024-05-15 00:45:09.528778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.565 [2024-05-15 00:45:09.528791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.565 [2024-05-15 00:45:09.534233] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.565 [2024-05-15 00:45:09.534264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.565 [2024-05-15 00:45:09.534274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.565 [2024-05-15 00:45:09.540337] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.565 [2024-05-15 00:45:09.540372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.565 [2024-05-15 00:45:09.540389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.565 [2024-05-15 00:45:09.544776] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.565 [2024-05-15 00:45:09.544809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.565 [2024-05-15 00:45:09.544820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.565 [2024-05-15 00:45:09.549168] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.565 [2024-05-15 00:45:09.549204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.565 [2024-05-15 00:45:09.549216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.565 [2024-05-15 00:45:09.554163] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.565 [2024-05-15 00:45:09.554196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.565 [2024-05-15 00:45:09.554207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.565 [2024-05-15 00:45:09.558658] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.565 [2024-05-15 00:45:09.558690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.565 [2024-05-15 00:45:09.558707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.565 [2024-05-15 00:45:09.562655] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.565 [2024-05-15 00:45:09.562690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.565 [2024-05-15 00:45:09.562704] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.565 [2024-05-15 00:45:09.567228] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.565 [2024-05-15 00:45:09.567261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.565 [2024-05-15 00:45:09.567272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.565 [2024-05-15 00:45:09.573762] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.565 [2024-05-15 00:45:09.573796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.565 [2024-05-15 00:45:09.573807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.565 [2024-05-15 00:45:09.579721] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.565 [2024-05-15 00:45:09.579753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.565 [2024-05-15 00:45:09.579764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.566 [2024-05-15 00:45:09.587522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.566 [2024-05-15 00:45:09.587557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.566 [2024-05-15 00:45:09.587568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.566 [2024-05-15 00:45:09.593570] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.566 [2024-05-15 00:45:09.593601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.566 [2024-05-15 00:45:09.593612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.566 [2024-05-15 00:45:09.598099] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.566 [2024-05-15 00:45:09.598131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.566 [2024-05-15 00:45:09.598142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.566 [2024-05-15 00:45:09.602253] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.566 [2024-05-15 00:45:09.602286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19456 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:43.566 [2024-05-15 00:45:09.602297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.566 [2024-05-15 00:45:09.606161] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.566 [2024-05-15 00:45:09.606198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.566 [2024-05-15 00:45:09.606210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.566 [2024-05-15 00:45:09.610452] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.566 [2024-05-15 00:45:09.610483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.566 [2024-05-15 00:45:09.610496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.566 [2024-05-15 00:45:09.615807] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.566 [2024-05-15 00:45:09.615842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.566 [2024-05-15 00:45:09.615854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.566 [2024-05-15 00:45:09.622637] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.566 [2024-05-15 00:45:09.622668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.566 [2024-05-15 00:45:09.622679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.566 [2024-05-15 00:45:09.626282] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.566 [2024-05-15 00:45:09.626318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.566 [2024-05-15 00:45:09.626330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.566 [2024-05-15 00:45:09.634536] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.566 [2024-05-15 00:45:09.634575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.566 [2024-05-15 00:45:09.634588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.566 [2024-05-15 00:45:09.640581] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.566 [2024-05-15 00:45:09.640612] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.566 [2024-05-15 00:45:09.640623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.566 [2024-05-15 00:45:09.645312] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.566 [2024-05-15 00:45:09.645345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.566 [2024-05-15 00:45:09.645357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.566 [2024-05-15 00:45:09.649148] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.566 [2024-05-15 00:45:09.649179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.566 [2024-05-15 00:45:09.649189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.566 [2024-05-15 00:45:09.653088] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.566 [2024-05-15 00:45:09.653118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.566 [2024-05-15 00:45:09.653129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.566 [2024-05-15 00:45:09.656850] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.566 [2024-05-15 00:45:09.656883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.566 [2024-05-15 00:45:09.656894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.566 [2024-05-15 00:45:09.660812] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.566 [2024-05-15 00:45:09.660850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.566 [2024-05-15 00:45:09.660861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.566 [2024-05-15 00:45:09.664622] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.566 [2024-05-15 00:45:09.664654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.566 [2024-05-15 00:45:09.664665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.566 [2024-05-15 00:45:09.668518] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x6150003a1400) 00:29:43.566 [2024-05-15 00:45:09.668555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.566 [2024-05-15 00:45:09.668570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.566 [2024-05-15 00:45:09.673010] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.566 [2024-05-15 00:45:09.673042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.566 [2024-05-15 00:45:09.673055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.566 [2024-05-15 00:45:09.679455] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.566 [2024-05-15 00:45:09.679485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.566 [2024-05-15 00:45:09.679496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.566 [2024-05-15 00:45:09.684061] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.566 [2024-05-15 00:45:09.684090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.566 [2024-05-15 00:45:09.684101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.566 [2024-05-15 00:45:09.689278] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.566 [2024-05-15 00:45:09.689315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.566 [2024-05-15 00:45:09.689327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.566 [2024-05-15 00:45:09.695661] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.566 [2024-05-15 00:45:09.695692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.566 [2024-05-15 00:45:09.695703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.566 [2024-05-15 00:45:09.701088] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.567 [2024-05-15 00:45:09.701121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.567 [2024-05-15 00:45:09.701134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.567 [2024-05-15 
00:45:09.705487] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.567 [2024-05-15 00:45:09.705518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.567 [2024-05-15 00:45:09.705529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.567 [2024-05-15 00:45:09.709805] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.567 [2024-05-15 00:45:09.709837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.567 [2024-05-15 00:45:09.709848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.567 [2024-05-15 00:45:09.714416] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.567 [2024-05-15 00:45:09.714445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.567 [2024-05-15 00:45:09.714456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.567 [2024-05-15 00:45:09.719809] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.567 [2024-05-15 00:45:09.719845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.567 [2024-05-15 00:45:09.719857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.567 [2024-05-15 00:45:09.724386] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.567 [2024-05-15 00:45:09.724418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.567 [2024-05-15 00:45:09.724430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.825 [2024-05-15 00:45:09.728795] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.825 [2024-05-15 00:45:09.728830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.825 [2024-05-15 00:45:09.728843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.825 [2024-05-15 00:45:09.733028] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.825 [2024-05-15 00:45:09.733059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.825 [2024-05-15 00:45:09.733070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.825 [2024-05-15 00:45:09.736947] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.825 [2024-05-15 00:45:09.736978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.825 [2024-05-15 00:45:09.736990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.825 [2024-05-15 00:45:09.740727] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.825 [2024-05-15 00:45:09.740756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.825 [2024-05-15 00:45:09.740766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.825 [2024-05-15 00:45:09.744594] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.825 [2024-05-15 00:45:09.744623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.825 [2024-05-15 00:45:09.744634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.825 [2024-05-15 00:45:09.748975] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.825 [2024-05-15 00:45:09.749012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.825 [2024-05-15 00:45:09.749030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.825 [2024-05-15 00:45:09.754556] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.825 [2024-05-15 00:45:09.754587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.825 [2024-05-15 00:45:09.754598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.826 [2024-05-15 00:45:09.759866] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.826 [2024-05-15 00:45:09.759897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.826 [2024-05-15 00:45:09.759909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.826 [2024-05-15 00:45:09.764759] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.826 [2024-05-15 00:45:09.764790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.826 [2024-05-15 
00:45:09.764802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.826 [2024-05-15 00:45:09.769930] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.826 [2024-05-15 00:45:09.769969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.826 [2024-05-15 00:45:09.769980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.826 [2024-05-15 00:45:09.774503] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.826 [2024-05-15 00:45:09.774534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.826 [2024-05-15 00:45:09.774546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.826 [2024-05-15 00:45:09.778472] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.826 [2024-05-15 00:45:09.778504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.826 [2024-05-15 00:45:09.778515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.826 [2024-05-15 00:45:09.782604] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.826 [2024-05-15 00:45:09.782641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.826 [2024-05-15 00:45:09.782654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.826 [2024-05-15 00:45:09.787720] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.826 [2024-05-15 00:45:09.787753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.826 [2024-05-15 00:45:09.787765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.826 [2024-05-15 00:45:09.794035] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.826 [2024-05-15 00:45:09.794071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.826 [2024-05-15 00:45:09.794083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.826 [2024-05-15 00:45:09.798721] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.826 [2024-05-15 00:45:09.798753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.826 [2024-05-15 00:45:09.798766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.826 [2024-05-15 00:45:09.803545] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.826 [2024-05-15 00:45:09.803589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.826 [2024-05-15 00:45:09.803602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.826 [2024-05-15 00:45:09.806568] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.826 [2024-05-15 00:45:09.806599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.826 [2024-05-15 00:45:09.806610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.826 [2024-05-15 00:45:09.812658] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.826 [2024-05-15 00:45:09.812690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.826 [2024-05-15 00:45:09.812702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.826 [2024-05-15 00:45:09.817104] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.826 [2024-05-15 00:45:09.817136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.826 [2024-05-15 00:45:09.817146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.826 [2024-05-15 00:45:09.821612] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.826 [2024-05-15 00:45:09.821650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.826 [2024-05-15 00:45:09.821662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.826 [2024-05-15 00:45:09.826001] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.826 [2024-05-15 00:45:09.826043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.826 [2024-05-15 00:45:09.826054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.826 [2024-05-15 00:45:09.829889] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.826 [2024-05-15 00:45:09.829919] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.826 [2024-05-15 00:45:09.829930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.826 [2024-05-15 00:45:09.833765] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.826 [2024-05-15 00:45:09.833796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.826 [2024-05-15 00:45:09.833807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.826 [2024-05-15 00:45:09.837611] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.826 [2024-05-15 00:45:09.837641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.826 [2024-05-15 00:45:09.837653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.826 [2024-05-15 00:45:09.842005] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.826 [2024-05-15 00:45:09.842034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.826 [2024-05-15 00:45:09.842045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.826 [2024-05-15 00:45:09.847034] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.826 [2024-05-15 00:45:09.847074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.826 [2024-05-15 00:45:09.847087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.826 [2024-05-15 00:45:09.851733] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.826 [2024-05-15 00:45:09.851766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.826 [2024-05-15 00:45:09.851778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.826 [2024-05-15 00:45:09.855874] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.826 [2024-05-15 00:45:09.855906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.826 [2024-05-15 00:45:09.855918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.826 [2024-05-15 00:45:09.860507] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x6150003a1400) 00:29:43.826 [2024-05-15 00:45:09.860539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.826 [2024-05-15 00:45:09.860558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.826 [2024-05-15 00:45:09.864513] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.826 [2024-05-15 00:45:09.864545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.826 [2024-05-15 00:45:09.864568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.826 [2024-05-15 00:45:09.869088] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.826 [2024-05-15 00:45:09.869119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.826 [2024-05-15 00:45:09.869131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.826 [2024-05-15 00:45:09.875557] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.826 [2024-05-15 00:45:09.875589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.826 [2024-05-15 00:45:09.875600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.826 [2024-05-15 00:45:09.880205] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.826 [2024-05-15 00:45:09.880236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.826 [2024-05-15 00:45:09.880247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.826 [2024-05-15 00:45:09.884900] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.827 [2024-05-15 00:45:09.884935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.827 [2024-05-15 00:45:09.884949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.827 [2024-05-15 00:45:09.890029] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.827 [2024-05-15 00:45:09.890061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.827 [2024-05-15 00:45:09.890072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.827 [2024-05-15 
00:45:09.895950] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.827 [2024-05-15 00:45:09.895979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.827 [2024-05-15 00:45:09.895991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.827 [2024-05-15 00:45:09.902467] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.827 [2024-05-15 00:45:09.902501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.827 [2024-05-15 00:45:09.902513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.827 [2024-05-15 00:45:09.908898] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.827 [2024-05-15 00:45:09.908928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.827 [2024-05-15 00:45:09.908940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.827 [2024-05-15 00:45:09.914196] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.827 [2024-05-15 00:45:09.914230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.827 [2024-05-15 00:45:09.914242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.827 [2024-05-15 00:45:09.918743] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.827 [2024-05-15 00:45:09.918774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.827 [2024-05-15 00:45:09.918785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.827 [2024-05-15 00:45:09.923368] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.827 [2024-05-15 00:45:09.923399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.827 [2024-05-15 00:45:09.923412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.827 [2024-05-15 00:45:09.928729] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.827 [2024-05-15 00:45:09.928763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.827 [2024-05-15 00:45:09.928776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.827 [2024-05-15 00:45:09.935589] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.827 [2024-05-15 00:45:09.935621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.827 [2024-05-15 00:45:09.935638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.827 [2024-05-15 00:45:09.942825] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.827 [2024-05-15 00:45:09.942856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.827 [2024-05-15 00:45:09.942867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.827 [2024-05-15 00:45:09.949733] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.827 [2024-05-15 00:45:09.949767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.827 [2024-05-15 00:45:09.949779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.827 [2024-05-15 00:45:09.955555] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.827 [2024-05-15 00:45:09.955587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.827 [2024-05-15 00:45:09.955598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.827 [2024-05-15 00:45:09.960071] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.827 [2024-05-15 00:45:09.960100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.827 [2024-05-15 00:45:09.960112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.827 [2024-05-15 00:45:09.964522] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.827 [2024-05-15 00:45:09.964556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.827 [2024-05-15 00:45:09.964568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:43.827 [2024-05-15 00:45:09.969388] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.827 [2024-05-15 00:45:09.969418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.827 [2024-05-15 00:45:09.969429] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:43.827 [2024-05-15 00:45:09.974858] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.827 [2024-05-15 00:45:09.974889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.827 [2024-05-15 00:45:09.974900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:43.827 [2024-05-15 00:45:09.979293] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.827 [2024-05-15 00:45:09.979323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.827 [2024-05-15 00:45:09.979335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:43.827 [2024-05-15 00:45:09.984204] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:43.827 [2024-05-15 00:45:09.984236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.827 [2024-05-15 00:45:09.984248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:44.085 [2024-05-15 00:45:09.989264] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.085 [2024-05-15 00:45:09.989294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.085 [2024-05-15 00:45:09.989306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:44.085 [2024-05-15 00:45:09.993781] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.085 [2024-05-15 00:45:09.993810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.085 [2024-05-15 00:45:09.993821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.086 [2024-05-15 00:45:09.999479] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.086 [2024-05-15 00:45:09.999511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.086 [2024-05-15 00:45:09.999523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:44.086 [2024-05-15 00:45:10.005113] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.086 [2024-05-15 00:45:10.005152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5280 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.086 [2024-05-15 00:45:10.005166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:44.086 [2024-05-15 00:45:10.012219] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.086 [2024-05-15 00:45:10.012266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.086 [2024-05-15 00:45:10.012284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:44.086 [2024-05-15 00:45:10.018055] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.086 [2024-05-15 00:45:10.018100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.086 [2024-05-15 00:45:10.018115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.086 [2024-05-15 00:45:10.023015] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.086 [2024-05-15 00:45:10.023049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.086 [2024-05-15 00:45:10.023062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:44.086 [2024-05-15 00:45:10.028207] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.086 [2024-05-15 00:45:10.028256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.086 [2024-05-15 00:45:10.028278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:44.086 [2024-05-15 00:45:10.033464] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.086 [2024-05-15 00:45:10.033505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.086 [2024-05-15 00:45:10.033521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:44.086 [2024-05-15 00:45:10.038352] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.086 [2024-05-15 00:45:10.038387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.086 [2024-05-15 00:45:10.038399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.086 [2024-05-15 00:45:10.042713] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.086 [2024-05-15 00:45:10.042746] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.086 [2024-05-15 00:45:10.042758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:44.086 [2024-05-15 00:45:10.047018] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.086 [2024-05-15 00:45:10.047048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.086 [2024-05-15 00:45:10.047061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:44.086 [2024-05-15 00:45:10.050852] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.086 [2024-05-15 00:45:10.050887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.086 [2024-05-15 00:45:10.050900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:44.086 [2024-05-15 00:45:10.054744] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.086 [2024-05-15 00:45:10.054778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.086 [2024-05-15 00:45:10.054790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.086 [2024-05-15 00:45:10.058680] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.086 [2024-05-15 00:45:10.058711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.086 [2024-05-15 00:45:10.058723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:44.086 [2024-05-15 00:45:10.064086] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.086 [2024-05-15 00:45:10.064117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.086 [2024-05-15 00:45:10.064129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:44.086 [2024-05-15 00:45:10.068084] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.086 [2024-05-15 00:45:10.068115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.086 [2024-05-15 00:45:10.068126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:44.086 [2024-05-15 00:45:10.072704] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x6150003a1400) 00:29:44.086 [2024-05-15 00:45:10.072800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.086 [2024-05-15 00:45:10.072855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.086 [2024-05-15 00:45:10.077544] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.086 [2024-05-15 00:45:10.077592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.086 [2024-05-15 00:45:10.077618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:44.086 [2024-05-15 00:45:10.082425] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.086 [2024-05-15 00:45:10.082467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.086 [2024-05-15 00:45:10.082480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:44.086 [2024-05-15 00:45:10.087521] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.086 [2024-05-15 00:45:10.087566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.086 [2024-05-15 00:45:10.087584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:44.086 [2024-05-15 00:45:10.091997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.086 [2024-05-15 00:45:10.092033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.086 [2024-05-15 00:45:10.092044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.086 [2024-05-15 00:45:10.097368] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.086 [2024-05-15 00:45:10.097402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.086 [2024-05-15 00:45:10.097415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:44.086 [2024-05-15 00:45:10.101786] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.086 [2024-05-15 00:45:10.101819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.086 [2024-05-15 00:45:10.101830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:44.086 [2024-05-15 00:45:10.105664] 
nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.086 [2024-05-15 00:45:10.105699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.086 [2024-05-15 00:45:10.105716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:44.086 [2024-05-15 00:45:10.110020] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.086 [2024-05-15 00:45:10.110050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.086 [2024-05-15 00:45:10.110062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.086 [2024-05-15 00:45:10.115927] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.086 [2024-05-15 00:45:10.115957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.086 [2024-05-15 00:45:10.115968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:44.086 [2024-05-15 00:45:10.121745] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.086 [2024-05-15 00:45:10.121783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.086 [2024-05-15 00:45:10.121795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:44.086 [2024-05-15 00:45:10.128538] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.086 [2024-05-15 00:45:10.128575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.086 [2024-05-15 00:45:10.128586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:44.086 [2024-05-15 00:45:10.135436] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.087 [2024-05-15 00:45:10.135469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.087 [2024-05-15 00:45:10.135481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.087 [2024-05-15 00:45:10.140766] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.087 [2024-05-15 00:45:10.140801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.087 [2024-05-15 00:45:10.140813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:44.087 [2024-05-15 00:45:10.144696] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.087 [2024-05-15 00:45:10.144729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.087 [2024-05-15 00:45:10.144741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:44.087 [2024-05-15 00:45:10.148499] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.087 [2024-05-15 00:45:10.148531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.087 [2024-05-15 00:45:10.148542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:44.087 [2024-05-15 00:45:10.152314] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.087 [2024-05-15 00:45:10.152346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.087 [2024-05-15 00:45:10.152359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.087 [2024-05-15 00:45:10.156146] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.087 [2024-05-15 00:45:10.156180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.087 [2024-05-15 00:45:10.156192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:44.087 [2024-05-15 00:45:10.159997] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.087 [2024-05-15 00:45:10.160032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.087 [2024-05-15 00:45:10.160044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:44.087 [2024-05-15 00:45:10.163841] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.087 [2024-05-15 00:45:10.163875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.087 [2024-05-15 00:45:10.163887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:44.087 [2024-05-15 00:45:10.167729] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.087 [2024-05-15 00:45:10.167761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.087 [2024-05-15 00:45:10.167772] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.087 [2024-05-15 00:45:10.171563] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.087 [2024-05-15 00:45:10.171599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.087 [2024-05-15 00:45:10.171611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:44.087 [2024-05-15 00:45:10.175387] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.087 [2024-05-15 00:45:10.175418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.087 [2024-05-15 00:45:10.175429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:44.087 [2024-05-15 00:45:10.179212] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.087 [2024-05-15 00:45:10.179245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.087 [2024-05-15 00:45:10.179258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:44.087 [2024-05-15 00:45:10.183136] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.087 [2024-05-15 00:45:10.183166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.087 [2024-05-15 00:45:10.183184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.087 [2024-05-15 00:45:10.187537] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.087 [2024-05-15 00:45:10.187577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.087 [2024-05-15 00:45:10.187588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:44.087 [2024-05-15 00:45:10.193078] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.087 [2024-05-15 00:45:10.193109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.087 [2024-05-15 00:45:10.193120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:44.087 [2024-05-15 00:45:10.198991] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.087 [2024-05-15 00:45:10.199026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:44.087 [2024-05-15 00:45:10.199037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:44.087 [2024-05-15 00:45:10.203730] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.087 [2024-05-15 00:45:10.203760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.087 [2024-05-15 00:45:10.203773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:44.087 [2024-05-15 00:45:10.208200] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.087 [2024-05-15 00:45:10.208229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.087 [2024-05-15 00:45:10.208241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:44.087 [2024-05-15 00:45:10.212722] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.087 [2024-05-15 00:45:10.212752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.087 [2024-05-15 00:45:10.212763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:44.087 [2024-05-15 00:45:10.217250] nvme_tcp.c:1450:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6150003a1400) 00:29:44.087 [2024-05-15 00:45:10.217282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.087 [2024-05-15 00:45:10.217295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:44.087 00:29:44.087 Latency(us) 00:29:44.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.087 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:44.087 nvme0n1 : 2.00 5999.00 749.87 0.00 0.00 2663.88 582.06 11037.64 00:29:44.087 =================================================================================================================== 00:29:44.087 Total : 5999.00 749.87 0.00 0.00 2663.88 582.06 11037.64 00:29:44.087 0 00:29:44.087 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:44.087 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:44.087 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:44.087 | .driver_specific 00:29:44.087 | .nvme_error 00:29:44.088 | .status_code 00:29:44.088 | .command_transient_transport_error' 00:29:44.088 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:44.345 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # 
(( 387 > 0 )) 00:29:44.345 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2183573 00:29:44.345 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 2183573 ']' 00:29:44.345 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 2183573 00:29:44.345 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:29:44.345 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:44.345 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2183573 00:29:44.345 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:29:44.345 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:29:44.345 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2183573' 00:29:44.345 killing process with pid 2183573 00:29:44.345 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 2183573 00:29:44.345 Received shutdown signal, test time was about 2.000000 seconds 00:29:44.345 00:29:44.345 Latency(us) 00:29:44.345 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.345 =================================================================================================================== 00:29:44.345 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:44.345 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 2183573 00:29:44.911 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:29:44.911 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:44.911 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:44.911 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:44.911 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:44.911 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2184238 00:29:44.911 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2184238 /var/tmp/bperf.sock 00:29:44.911 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 2184238 ']' 00:29:44.911 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:44.911 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:44.911 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:44.911 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:44.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
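The (( 387 > 0 )) above is the pass condition for the randread leg: host/digest.sh's get_transient_errcount asks the bdevperf instance, over its RPC socket, for bdev_get_iostat and extracts the command_transient_transport_error counter from the per-bdev NVMe error statistics with jq. A minimal sketch of that check, assuming the SPDK checkout and socket paths printed in this log and a bdevperf process already listening on /var/tmp/bperf.sock:

# Sketch only; mirrors the get_transient_errcount/bperf_rpc calls visible above.
SPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock

get_transient_errcount() {
    # The counters are populated because the controller is set up after
    # bdev_nvme_set_options --nvme-error-stat (see the setup further below).
    "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b "$1" |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
# Each corrupted CRC32C digest should have surfaced as a transient transport
# error, so the leg passes only if the count is non-zero (387 in the run above).
(( errcount > 0 ))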
00:29:44.911 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:44.911 00:45:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:44.911 [2024-05-15 00:45:10.873832] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:29:44.911 [2024-05-15 00:45:10.873979] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2184238 ] 00:29:44.911 EAL: No free 2048 kB hugepages reported on node 1 00:29:44.911 [2024-05-15 00:45:11.007892] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.171 [2024-05-15 00:45:11.099431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:45.740 00:45:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:45.740 00:45:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:29:45.740 00:45:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:45.740 00:45:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:45.740 00:45:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:45.740 00:45:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.740 00:45:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:45.740 00:45:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.740 00:45:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:45.740 00:45:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:45.998 nvme0n1 00:29:45.998 00:45:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:45.998 00:45:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:45.998 00:45:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:45.998 00:45:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:45.998 00:45:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:45.998 00:45:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:45.998 Running I/O for 2 seconds... 
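"Running I/O for 2 seconds..." marks the start of the randwrite leg (run_bperf_err randwrite 4096 128) set up just above: a fresh bdevperf is launched idle on /var/tmp/bperf.sock, NVMe error accounting and unlimited bdev retries are enabled, the TCP controller is attached with --ddgst so every payload carries a CRC32C data digest, the accel layer is told to corrupt every 256th crc32c operation, and bdevperf.py perform_tests starts the workload. A condensed sketch of that sequence with the flags, address and NQN taken from this log; rpc_cmd and waitforlisten are suite helpers, so the target socket path and the socket poll below are simplifying assumptions:

# Condensed sketch of the randwrite error-injection setup shown above.
SPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk
BPERF_SOCK=/var/tmp/bperf.sock
TGT_SOCK=/var/tmp/spdk.sock   # assumption: rpc.py default; the suite's rpc_cmd resolves the target socket itself

# Launch bdevperf idle (-z) on its own RPC socket: 4 KiB randwrite, QD 128, 2 s, core mask 0x2.
"$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
bperfpid=$!   # recorded so the leg can killprocess it afterwards
# The suite waits with waitforlisten; polling for the socket is enough for a sketch.
while [ ! -S "$BPERF_SOCK" ]; do sleep 0.1; done

# Count NVMe errors per status code and retry failed I/O indefinitely.
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Start with error injection disabled, then attach the controller with data digest enabled.
"$SPDK_DIR/scripts/rpc.py" -s "$TGT_SOCK" accel_error_inject_error -o crc32c -t disable
"$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt every 256th crc32c operation, then kick off the 2-second workload.
"$SPDK_DIR/scripts/rpc.py" -s "$TGT_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 256
"$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests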
00:29:45.998 [2024-05-15 00:45:12.021149] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:45.998 [2024-05-15 00:45:12.021342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.998 [2024-05-15 00:45:12.021384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.998 [2024-05-15 00:45:12.030739] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:45.998 [2024-05-15 00:45:12.030910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.998 [2024-05-15 00:45:12.030944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.998 [2024-05-15 00:45:12.040276] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:45.998 [2024-05-15 00:45:12.040445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.998 [2024-05-15 00:45:12.040483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.998 [2024-05-15 00:45:12.049751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:45.998 [2024-05-15 00:45:12.049918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.998 [2024-05-15 00:45:12.049945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.998 [2024-05-15 00:45:12.059315] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:45.998 [2024-05-15 00:45:12.059478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.998 [2024-05-15 00:45:12.059504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.998 [2024-05-15 00:45:12.068805] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:45.998 [2024-05-15 00:45:12.068964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.998 [2024-05-15 00:45:12.068990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.998 [2024-05-15 00:45:12.078277] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:45.998 [2024-05-15 00:45:12.078441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.998 [2024-05-15 00:45:12.078466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.998 [2024-05-15 00:45:12.087811] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:45.998 [2024-05-15 00:45:12.087969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.998 [2024-05-15 00:45:12.087995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.998 [2024-05-15 00:45:12.097298] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:45.998 [2024-05-15 00:45:12.097460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.998 [2024-05-15 00:45:12.097484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.998 [2024-05-15 00:45:12.106795] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:45.998 [2024-05-15 00:45:12.106955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.998 [2024-05-15 00:45:12.106980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.998 [2024-05-15 00:45:12.116311] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:45.998 [2024-05-15 00:45:12.116476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.998 [2024-05-15 00:45:12.116499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.998 [2024-05-15 00:45:12.125783] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:45.998 [2024-05-15 00:45:12.125947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.998 [2024-05-15 00:45:12.125972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.998 [2024-05-15 00:45:12.135294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:45.998 [2024-05-15 00:45:12.135456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.998 [2024-05-15 00:45:12.135480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.998 [2024-05-15 00:45:12.144767] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:45.998 [2024-05-15 00:45:12.144928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.998 [2024-05-15 00:45:12.144952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:45.998 [2024-05-15 00:45:12.154238] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:45.998 [2024-05-15 00:45:12.154400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:45.998 [2024-05-15 00:45:12.154423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.257 [2024-05-15 00:45:12.163721] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.257 [2024-05-15 00:45:12.163885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.257 [2024-05-15 00:45:12.163909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.257 [2024-05-15 00:45:12.173207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.257 [2024-05-15 00:45:12.173370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.257 [2024-05-15 00:45:12.173393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.257 [2024-05-15 00:45:12.182700] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.257 [2024-05-15 00:45:12.182865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.257 [2024-05-15 00:45:12.182888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.257 [2024-05-15 00:45:12.192198] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.257 [2024-05-15 00:45:12.192357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.257 [2024-05-15 00:45:12.192381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.257 [2024-05-15 00:45:12.201671] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.257 [2024-05-15 00:45:12.201830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.257 [2024-05-15 00:45:12.201852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.257 [2024-05-15 00:45:12.211162] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.257 [2024-05-15 00:45:12.211322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.257 [2024-05-15 00:45:12.211348] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.257 [2024-05-15 00:45:12.220679] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.257 [2024-05-15 00:45:12.220841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.257 [2024-05-15 00:45:12.220864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.257 [2024-05-15 00:45:12.230148] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.257 [2024-05-15 00:45:12.230310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.257 [2024-05-15 00:45:12.230342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.257 [2024-05-15 00:45:12.239669] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.257 [2024-05-15 00:45:12.239829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.257 [2024-05-15 00:45:12.239853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.257 [2024-05-15 00:45:12.249151] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.257 [2024-05-15 00:45:12.249313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.257 [2024-05-15 00:45:12.249337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.257 [2024-05-15 00:45:12.258650] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.257 [2024-05-15 00:45:12.258810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24890 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.257 [2024-05-15 00:45:12.258833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.257 [2024-05-15 00:45:12.268153] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.257 [2024-05-15 00:45:12.268313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.257 [2024-05-15 00:45:12.268337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.257 [2024-05-15 00:45:12.277639] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.257 [2024-05-15 00:45:12.277800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.257 
[2024-05-15 00:45:12.277824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.257 [2024-05-15 00:45:12.287163] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.257 [2024-05-15 00:45:12.287323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.257 [2024-05-15 00:45:12.287350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.257 [2024-05-15 00:45:12.296638] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.257 [2024-05-15 00:45:12.296798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.257 [2024-05-15 00:45:12.296823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.257 [2024-05-15 00:45:12.306113] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.257 [2024-05-15 00:45:12.306275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.258 [2024-05-15 00:45:12.306299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.258 [2024-05-15 00:45:12.315634] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.258 [2024-05-15 00:45:12.315796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.258 [2024-05-15 00:45:12.315820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.258 [2024-05-15 00:45:12.325413] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.258 [2024-05-15 00:45:12.325613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.258 [2024-05-15 00:45:12.325642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.258 [2024-05-15 00:45:12.336963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.258 [2024-05-15 00:45:12.337125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.258 [2024-05-15 00:45:12.337150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.258 [2024-05-15 00:45:12.346460] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.258 [2024-05-15 00:45:12.346628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25155 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:46.258 [2024-05-15 00:45:12.346654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.258 [2024-05-15 00:45:12.355927] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.258 [2024-05-15 00:45:12.356089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.258 [2024-05-15 00:45:12.356116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.258 [2024-05-15 00:45:12.365457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.258 [2024-05-15 00:45:12.365622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.258 [2024-05-15 00:45:12.365647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.258 [2024-05-15 00:45:12.374942] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.258 [2024-05-15 00:45:12.375101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.258 [2024-05-15 00:45:12.375125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.258 [2024-05-15 00:45:12.384404] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.258 [2024-05-15 00:45:12.384568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.258 [2024-05-15 00:45:12.384590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.258 [2024-05-15 00:45:12.393943] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.258 [2024-05-15 00:45:12.394107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.258 [2024-05-15 00:45:12.394134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.258 [2024-05-15 00:45:12.403420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.258 [2024-05-15 00:45:12.403585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.258 [2024-05-15 00:45:12.403611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.258 [2024-05-15 00:45:12.412919] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.258 [2024-05-15 00:45:12.413078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:13897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.258 [2024-05-15 00:45:12.413103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.517 [2024-05-15 00:45:12.422430] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.517 [2024-05-15 00:45:12.422594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.517 [2024-05-15 00:45:12.422620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.517 [2024-05-15 00:45:12.431922] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.517 [2024-05-15 00:45:12.432083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.517 [2024-05-15 00:45:12.432106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.517 [2024-05-15 00:45:12.441443] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.517 [2024-05-15 00:45:12.441608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.517 [2024-05-15 00:45:12.441634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.517 [2024-05-15 00:45:12.450935] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.517 [2024-05-15 00:45:12.451095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.517 [2024-05-15 00:45:12.451130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.517 [2024-05-15 00:45:12.460405] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.517 [2024-05-15 00:45:12.460568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.517 [2024-05-15 00:45:12.460592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.517 [2024-05-15 00:45:12.469954] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.517 [2024-05-15 00:45:12.470113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.517 [2024-05-15 00:45:12.470141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.517 [2024-05-15 00:45:12.479434] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.517 [2024-05-15 00:45:12.479596] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.517 [2024-05-15 00:45:12.479622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.517 [2024-05-15 00:45:12.488919] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.517 [2024-05-15 00:45:12.489080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.517 [2024-05-15 00:45:12.489106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.517 [2024-05-15 00:45:12.498453] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.517 [2024-05-15 00:45:12.498619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.517 [2024-05-15 00:45:12.498644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.517 [2024-05-15 00:45:12.507944] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.517 [2024-05-15 00:45:12.508104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.517 [2024-05-15 00:45:12.508128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.517 [2024-05-15 00:45:12.517454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.517 [2024-05-15 00:45:12.517621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:54 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.517 [2024-05-15 00:45:12.517643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.517 [2024-05-15 00:45:12.526941] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.517 [2024-05-15 00:45:12.527103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.517 [2024-05-15 00:45:12.527127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.517 [2024-05-15 00:45:12.536426] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.517 [2024-05-15 00:45:12.536596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.517 [2024-05-15 00:45:12.536623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.517 [2024-05-15 00:45:12.545948] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.517 
[2024-05-15 00:45:12.546110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.517 [2024-05-15 00:45:12.546138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.518 [2024-05-15 00:45:12.555422] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.518 [2024-05-15 00:45:12.555588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.518 [2024-05-15 00:45:12.555614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.518 [2024-05-15 00:45:12.564901] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.518 [2024-05-15 00:45:12.565061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.518 [2024-05-15 00:45:12.565085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.518 [2024-05-15 00:45:12.574407] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.518 [2024-05-15 00:45:12.574570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.518 [2024-05-15 00:45:12.574595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.518 [2024-05-15 00:45:12.583883] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.518 [2024-05-15 00:45:12.584043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.518 [2024-05-15 00:45:12.584068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.518 [2024-05-15 00:45:12.593376] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.518 [2024-05-15 00:45:12.593542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.518 [2024-05-15 00:45:12.593570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.518 [2024-05-15 00:45:12.602894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.518 [2024-05-15 00:45:12.603054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.518 [2024-05-15 00:45:12.603083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.518 [2024-05-15 00:45:12.612362] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.518 [2024-05-15 00:45:12.612521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.518 [2024-05-15 00:45:12.612546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.518 [2024-05-15 00:45:12.621904] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.518 [2024-05-15 00:45:12.622065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.518 [2024-05-15 00:45:12.622090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.518 [2024-05-15 00:45:12.631397] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.518 [2024-05-15 00:45:12.631562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.518 [2024-05-15 00:45:12.631586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.518 [2024-05-15 00:45:12.640877] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.518 [2024-05-15 00:45:12.641036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.518 [2024-05-15 00:45:12.641061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.518 [2024-05-15 00:45:12.650388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.518 [2024-05-15 00:45:12.650549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.518 [2024-05-15 00:45:12.650578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.518 [2024-05-15 00:45:12.659866] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.518 [2024-05-15 00:45:12.660026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.518 [2024-05-15 00:45:12.660051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.518 [2024-05-15 00:45:12.669387] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.518 [2024-05-15 00:45:12.669549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.518 [2024-05-15 00:45:12.669578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.518 [2024-05-15 00:45:12.678869] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.518 [2024-05-15 00:45:12.679028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.518 [2024-05-15 00:45:12.679055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.778 [2024-05-15 00:45:12.688362] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.778 [2024-05-15 00:45:12.688523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.778 [2024-05-15 00:45:12.688548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.778 [2024-05-15 00:45:12.697893] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.778 [2024-05-15 00:45:12.698057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.778 [2024-05-15 00:45:12.698084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.778 [2024-05-15 00:45:12.707375] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.778 [2024-05-15 00:45:12.707536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.778 [2024-05-15 00:45:12.707563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.778 [2024-05-15 00:45:12.716859] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.778 [2024-05-15 00:45:12.717020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.778 [2024-05-15 00:45:12.717048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.778 [2024-05-15 00:45:12.726459] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.778 [2024-05-15 00:45:12.726625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.778 [2024-05-15 00:45:12.726650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.778 [2024-05-15 00:45:12.735951] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.778 [2024-05-15 00:45:12.736110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:687 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.778 [2024-05-15 00:45:12.736136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
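Every injected corruption in this randwrite run produces the same pair of messages: a "Data digest error" from the TCP transport's data_crc32_calc_done callback, immediately followed by the completion printed as COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. generic status 0x22, which is the status code feeding the counter checked at the end of each leg. The verdict itself comes from bdev_get_iostat as sketched earlier; purely as an illustration (the capture filename below is hypothetical), the same pairs can be tallied straight from a console log like this one:

# Illustration only, not part of host/digest.sh. Count occurrences rather than
# lines, since a wrapped capture can hold several completions per line.
grep -o 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bdevperf-console.log | wc -l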
00:29:46.778 [2024-05-15 00:45:12.745455] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.778 [2024-05-15 00:45:12.745621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.778 [2024-05-15 00:45:12.745648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.778 [2024-05-15 00:45:12.754931] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.778 [2024-05-15 00:45:12.755093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.778 [2024-05-15 00:45:12.755117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.778 [2024-05-15 00:45:12.764401] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.778 [2024-05-15 00:45:12.764564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.778 [2024-05-15 00:45:12.764589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.778 [2024-05-15 00:45:12.773907] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.778 [2024-05-15 00:45:12.774066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.778 [2024-05-15 00:45:12.774090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.778 [2024-05-15 00:45:12.783361] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.778 [2024-05-15 00:45:12.783533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.778 [2024-05-15 00:45:12.783559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.778 [2024-05-15 00:45:12.792832] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.778 [2024-05-15 00:45:12.792991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.778 [2024-05-15 00:45:12.793014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.778 [2024-05-15 00:45:12.802568] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.778 [2024-05-15 00:45:12.802741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.778 [2024-05-15 00:45:12.802765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.778 [2024-05-15 00:45:12.812055] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.778 [2024-05-15 00:45:12.812214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.778 [2024-05-15 00:45:12.812238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.778 [2024-05-15 00:45:12.821549] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.778 [2024-05-15 00:45:12.821721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.778 [2024-05-15 00:45:12.821745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.778 [2024-05-15 00:45:12.831028] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.778 [2024-05-15 00:45:12.831189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.778 [2024-05-15 00:45:12.831214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.778 [2024-05-15 00:45:12.840519] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.778 [2024-05-15 00:45:12.840684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.778 [2024-05-15 00:45:12.840710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.778 [2024-05-15 00:45:12.850027] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.778 [2024-05-15 00:45:12.850187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.778 [2024-05-15 00:45:12.850211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.778 [2024-05-15 00:45:12.859511] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.778 [2024-05-15 00:45:12.859675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.778 [2024-05-15 00:45:12.859700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.778 [2024-05-15 00:45:12.868995] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.778 [2024-05-15 00:45:12.869156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.779 [2024-05-15 00:45:12.869180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.779 [2024-05-15 00:45:12.878490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.779 [2024-05-15 00:45:12.878655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.779 [2024-05-15 00:45:12.878679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.779 [2024-05-15 00:45:12.887954] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.779 [2024-05-15 00:45:12.888114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.779 [2024-05-15 00:45:12.888137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.779 [2024-05-15 00:45:12.897450] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.779 [2024-05-15 00:45:12.897613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.779 [2024-05-15 00:45:12.897636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.779 [2024-05-15 00:45:12.906925] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.779 [2024-05-15 00:45:12.907087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.779 [2024-05-15 00:45:12.907110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.779 [2024-05-15 00:45:12.916390] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.779 [2024-05-15 00:45:12.916557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.779 [2024-05-15 00:45:12.916580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.779 [2024-05-15 00:45:12.925895] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.779 [2024-05-15 00:45:12.926059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.779 [2024-05-15 00:45:12.926082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:46.779 [2024-05-15 00:45:12.935384] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:46.779 [2024-05-15 00:45:12.935559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:46.779 [2024-05-15 00:45:12.935583] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.040 [2024-05-15 00:45:12.944871] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.040 [2024-05-15 00:45:12.945033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.040 [2024-05-15 00:45:12.945059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.040 [2024-05-15 00:45:12.954401] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.040 [2024-05-15 00:45:12.954566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.040 [2024-05-15 00:45:12.954589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.040 [2024-05-15 00:45:12.963895] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.040 [2024-05-15 00:45:12.964055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.040 [2024-05-15 00:45:12.964079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.040 [2024-05-15 00:45:12.973465] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.040 [2024-05-15 00:45:12.973630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.040 [2024-05-15 00:45:12.973655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.040 [2024-05-15 00:45:12.982963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.040 [2024-05-15 00:45:12.983123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.040 [2024-05-15 00:45:12.983147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.040 [2024-05-15 00:45:12.992443] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.040 [2024-05-15 00:45:12.992608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.040 [2024-05-15 00:45:12.992634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.040 [2024-05-15 00:45:13.001979] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.040 [2024-05-15 00:45:13.002140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.040 
[2024-05-15 00:45:13.002164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.040 [2024-05-15 00:45:13.011459] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.040 [2024-05-15 00:45:13.011626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.040 [2024-05-15 00:45:13.011651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.040 [2024-05-15 00:45:13.020936] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.040 [2024-05-15 00:45:13.021097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.040 [2024-05-15 00:45:13.021120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.040 [2024-05-15 00:45:13.030445] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.040 [2024-05-15 00:45:13.030612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.040 [2024-05-15 00:45:13.030637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.040 [2024-05-15 00:45:13.039919] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.040 [2024-05-15 00:45:13.040082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.040 [2024-05-15 00:45:13.040105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.040 [2024-05-15 00:45:13.049439] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.040 [2024-05-15 00:45:13.049605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.040 [2024-05-15 00:45:13.049630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.040 [2024-05-15 00:45:13.058930] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.040 [2024-05-15 00:45:13.059091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.040 [2024-05-15 00:45:13.059114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.040 [2024-05-15 00:45:13.068400] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.040 [2024-05-15 00:45:13.068566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:47.040 [2024-05-15 00:45:13.068591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.040 [2024-05-15 00:45:13.077910] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.040 [2024-05-15 00:45:13.078075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.040 [2024-05-15 00:45:13.078098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.040 [2024-05-15 00:45:13.087365] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.040 [2024-05-15 00:45:13.087525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.040 [2024-05-15 00:45:13.087549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.040 [2024-05-15 00:45:13.096818] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.040 [2024-05-15 00:45:13.096980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.040 [2024-05-15 00:45:13.097003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.040 [2024-05-15 00:45:13.106340] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.040 [2024-05-15 00:45:13.106500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.040 [2024-05-15 00:45:13.106527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.040 [2024-05-15 00:45:13.115807] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.040 [2024-05-15 00:45:13.115968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.040 [2024-05-15 00:45:13.115991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.040 [2024-05-15 00:45:13.125289] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.040 [2024-05-15 00:45:13.125449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.040 [2024-05-15 00:45:13.125471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.040 [2024-05-15 00:45:13.134778] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.040 [2024-05-15 00:45:13.134948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:20402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.040 [2024-05-15 00:45:13.134972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.040 [2024-05-15 00:45:13.144259] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.040 [2024-05-15 00:45:13.144420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.040 [2024-05-15 00:45:13.144445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.040 [2024-05-15 00:45:13.153772] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.040 [2024-05-15 00:45:13.153932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.040 [2024-05-15 00:45:13.153956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.041 [2024-05-15 00:45:13.163250] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.041 [2024-05-15 00:45:13.163411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.041 [2024-05-15 00:45:13.163435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.041 [2024-05-15 00:45:13.172709] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.041 [2024-05-15 00:45:13.172873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.041 [2024-05-15 00:45:13.172897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.041 [2024-05-15 00:45:13.182207] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.041 [2024-05-15 00:45:13.182371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.041 [2024-05-15 00:45:13.182394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.041 [2024-05-15 00:45:13.191672] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.041 [2024-05-15 00:45:13.191838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.041 [2024-05-15 00:45:13.191862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.041 [2024-05-15 00:45:13.201176] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.041 [2024-05-15 00:45:13.201339] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.041 [2024-05-15 00:45:13.201363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.301 [2024-05-15 00:45:13.210677] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.301 [2024-05-15 00:45:13.210841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.301 [2024-05-15 00:45:13.210866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.302 [2024-05-15 00:45:13.220121] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.302 [2024-05-15 00:45:13.220283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.302 [2024-05-15 00:45:13.220308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.302 [2024-05-15 00:45:13.229651] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.302 [2024-05-15 00:45:13.229813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.302 [2024-05-15 00:45:13.229839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.302 [2024-05-15 00:45:13.239114] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.302 [2024-05-15 00:45:13.239275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.302 [2024-05-15 00:45:13.239298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.302 [2024-05-15 00:45:13.248601] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.302 [2024-05-15 00:45:13.248762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.302 [2024-05-15 00:45:13.248787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.302 [2024-05-15 00:45:13.258099] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.302 [2024-05-15 00:45:13.258264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.302 [2024-05-15 00:45:13.258290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.302 [2024-05-15 00:45:13.268208] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.302 
[2024-05-15 00:45:13.268398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.302 [2024-05-15 00:45:13.268427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.302 [2024-05-15 00:45:13.279030] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.302 [2024-05-15 00:45:13.279190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.302 [2024-05-15 00:45:13.279214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.302 [2024-05-15 00:45:13.288531] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.302 [2024-05-15 00:45:13.288694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.302 [2024-05-15 00:45:13.288719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.302 [2024-05-15 00:45:13.297984] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.302 [2024-05-15 00:45:13.298146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.302 [2024-05-15 00:45:13.298169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.302 [2024-05-15 00:45:13.307496] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.302 [2024-05-15 00:45:13.307659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.302 [2024-05-15 00:45:13.307684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.302 [2024-05-15 00:45:13.316990] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.302 [2024-05-15 00:45:13.317152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.302 [2024-05-15 00:45:13.317177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.302 [2024-05-15 00:45:13.326454] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.302 [2024-05-15 00:45:13.326619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.302 [2024-05-15 00:45:13.326643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.302 [2024-05-15 00:45:13.335970] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.302 [2024-05-15 00:45:13.336132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.302 [2024-05-15 00:45:13.336155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.302 [2024-05-15 00:45:13.345423] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.302 [2024-05-15 00:45:13.345588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.302 [2024-05-15 00:45:13.345613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.302 [2024-05-15 00:45:13.354907] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.302 [2024-05-15 00:45:13.355072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.302 [2024-05-15 00:45:13.355099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.302 [2024-05-15 00:45:13.364412] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.302 [2024-05-15 00:45:13.364577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.302 [2024-05-15 00:45:13.364602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.302 [2024-05-15 00:45:13.373888] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.302 [2024-05-15 00:45:13.374047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.302 [2024-05-15 00:45:13.374072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.302 [2024-05-15 00:45:13.383541] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.302 [2024-05-15 00:45:13.383718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.302 [2024-05-15 00:45:13.383743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.302 [2024-05-15 00:45:13.394746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.302 [2024-05-15 00:45:13.394920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.302 [2024-05-15 00:45:13.394946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.302 [2024-05-15 00:45:13.404412] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.302 [2024-05-15 00:45:13.404579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.302 [2024-05-15 00:45:13.404605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.302 [2024-05-15 00:45:13.413922] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.302 [2024-05-15 00:45:13.414082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.302 [2024-05-15 00:45:13.414107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.302 [2024-05-15 00:45:13.423391] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.302 [2024-05-15 00:45:13.423556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.302 [2024-05-15 00:45:13.423581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.302 [2024-05-15 00:45:13.432860] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.302 [2024-05-15 00:45:13.433017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.302 [2024-05-15 00:45:13.433040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.302 [2024-05-15 00:45:13.442352] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.302 [2024-05-15 00:45:13.442515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.302 [2024-05-15 00:45:13.442539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.302 [2024-05-15 00:45:13.451822] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.302 [2024-05-15 00:45:13.451985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.302 [2024-05-15 00:45:13.452008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.302 [2024-05-15 00:45:13.461316] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.302 [2024-05-15 00:45:13.461476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.302 [2024-05-15 00:45:13.461500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
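
Each burst above follows the same three-message pattern: tcp.c reports a data digest (CRC32C) mismatch on the TCP qpair, nvme_qpair.c prints the affected WRITE command, and that command then completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) - status code type 0x0, status code 0x22 - so the run keeps going and the error is simply counted (the bdev_get_iostat check further down relies on exactly this counter). A quick way to tally the pattern offline is sketched below; it assumes this console output has been saved to a file (the name bdevperf.log is hypothetical) and is not part of the test itself.

    # Hedged sketch: count the digest failures and the matching 00/22 completions
    # in a saved copy of this log. The file name bdevperf.log is an assumption.
    digest_errors=$(grep -o 'Data digest error on tqpair' bdevperf.log | wc -l)
    transient_completions=$(grep -o 'TRANSIENT TRANSPORT ERROR (00/22)' bdevperf.log | wc -l)
    echo "digest errors:         ${digest_errors}"
    echo "transient completions: ${transient_completions}"
    # In a clean run of this test the two counts should line up: every corrupted
    # digest is expected to surface as exactly one transient transport error.
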
00:29:47.561 [2024-05-15 00:45:13.470804] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.561 [2024-05-15 00:45:13.470965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.561 [2024-05-15 00:45:13.470989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.561 [2024-05-15 00:45:13.480278] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.561 [2024-05-15 00:45:13.480437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.561 [2024-05-15 00:45:13.480461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.561 [2024-05-15 00:45:13.489790] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.561 [2024-05-15 00:45:13.489952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.561 [2024-05-15 00:45:13.489976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.561 [2024-05-15 00:45:13.499254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.561 [2024-05-15 00:45:13.499415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.561 [2024-05-15 00:45:13.499438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.561 [2024-05-15 00:45:13.508737] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.561 [2024-05-15 00:45:13.508898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.561 [2024-05-15 00:45:13.508920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.561 [2024-05-15 00:45:13.518246] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.561 [2024-05-15 00:45:13.518408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.561 [2024-05-15 00:45:13.518431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.561 [2024-05-15 00:45:13.527706] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.561 [2024-05-15 00:45:13.527868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.561 [2024-05-15 00:45:13.527891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.561 [2024-05-15 00:45:13.537215] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.561 [2024-05-15 00:45:13.537374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.561 [2024-05-15 00:45:13.537398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.561 [2024-05-15 00:45:13.546689] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.561 [2024-05-15 00:45:13.546852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.561 [2024-05-15 00:45:13.546875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.561 [2024-05-15 00:45:13.556148] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.561 [2024-05-15 00:45:13.556307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.561 [2024-05-15 00:45:13.556332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.561 [2024-05-15 00:45:13.565643] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.561 [2024-05-15 00:45:13.565806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.561 [2024-05-15 00:45:13.565836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.561 [2024-05-15 00:45:13.575123] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.561 [2024-05-15 00:45:13.575282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.561 [2024-05-15 00:45:13.575306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.561 [2024-05-15 00:45:13.584597] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.561 [2024-05-15 00:45:13.584758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.561 [2024-05-15 00:45:13.584782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.561 [2024-05-15 00:45:13.594114] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.561 [2024-05-15 00:45:13.594274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.561 [2024-05-15 00:45:13.594298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.561 [2024-05-15 00:45:13.603578] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.561 [2024-05-15 00:45:13.603738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.561 [2024-05-15 00:45:13.603766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.561 [2024-05-15 00:45:13.613059] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.561 [2024-05-15 00:45:13.613219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.561 [2024-05-15 00:45:13.613244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.561 [2024-05-15 00:45:13.622522] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.561 [2024-05-15 00:45:13.622688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.561 [2024-05-15 00:45:13.622712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.561 [2024-05-15 00:45:13.631991] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.561 [2024-05-15 00:45:13.632152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.561 [2024-05-15 00:45:13.632176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.562 [2024-05-15 00:45:13.641498] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.562 [2024-05-15 00:45:13.641663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.562 [2024-05-15 00:45:13.641687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.562 [2024-05-15 00:45:13.650937] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.562 [2024-05-15 00:45:13.651098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.562 [2024-05-15 00:45:13.651120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.562 [2024-05-15 00:45:13.660403] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.562 [2024-05-15 00:45:13.660566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.562 [2024-05-15 00:45:13.660592] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.562 [2024-05-15 00:45:13.669899] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.562 [2024-05-15 00:45:13.670060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.562 [2024-05-15 00:45:13.670083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.562 [2024-05-15 00:45:13.679373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.562 [2024-05-15 00:45:13.679536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.562 [2024-05-15 00:45:13.679564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.562 [2024-05-15 00:45:13.688843] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.562 [2024-05-15 00:45:13.689010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.562 [2024-05-15 00:45:13.689033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.562 [2024-05-15 00:45:13.698342] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.562 [2024-05-15 00:45:13.698503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.562 [2024-05-15 00:45:13.698528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.562 [2024-05-15 00:45:13.707789] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.562 [2024-05-15 00:45:13.707951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.562 [2024-05-15 00:45:13.707973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.562 [2024-05-15 00:45:13.717313] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.562 [2024-05-15 00:45:13.717475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.562 [2024-05-15 00:45:13.717498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.819 [2024-05-15 00:45:13.726770] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.819 [2024-05-15 00:45:13.726936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.819 
[2024-05-15 00:45:13.726958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.819 [2024-05-15 00:45:13.736264] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.819 [2024-05-15 00:45:13.736426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.819 [2024-05-15 00:45:13.736449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.819 [2024-05-15 00:45:13.745787] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.819 [2024-05-15 00:45:13.745950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.819 [2024-05-15 00:45:13.745974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.819 [2024-05-15 00:45:13.755360] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.819 [2024-05-15 00:45:13.755537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.819 [2024-05-15 00:45:13.755565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.819 [2024-05-15 00:45:13.764910] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.819 [2024-05-15 00:45:13.765072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.819 [2024-05-15 00:45:13.765096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.819 [2024-05-15 00:45:13.774443] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.819 [2024-05-15 00:45:13.774616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.819 [2024-05-15 00:45:13.774643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.819 [2024-05-15 00:45:13.783953] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.819 [2024-05-15 00:45:13.784114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.819 [2024-05-15 00:45:13.784138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.819 [2024-05-15 00:45:13.793490] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.819 [2024-05-15 00:45:13.793674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5276 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:47.819 [2024-05-15 00:45:13.793701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.819 [2024-05-15 00:45:13.803199] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.819 [2024-05-15 00:45:13.803369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.819 [2024-05-15 00:45:13.803394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.819 [2024-05-15 00:45:13.812711] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.819 [2024-05-15 00:45:13.812873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.819 [2024-05-15 00:45:13.812898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.819 [2024-05-15 00:45:13.822247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.819 [2024-05-15 00:45:13.822409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.819 [2024-05-15 00:45:13.822435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.819 [2024-05-15 00:45:13.831762] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.819 [2024-05-15 00:45:13.831926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.819 [2024-05-15 00:45:13.831952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.819 [2024-05-15 00:45:13.842415] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.819 [2024-05-15 00:45:13.842627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.819 [2024-05-15 00:45:13.842658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.819 [2024-05-15 00:45:13.853136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.819 [2024-05-15 00:45:13.853297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.819 [2024-05-15 00:45:13.853328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.819 [2024-05-15 00:45:13.862627] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.819 [2024-05-15 00:45:13.862790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:9837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.819 [2024-05-15 00:45:13.862817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.819 [2024-05-15 00:45:13.872157] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.819 [2024-05-15 00:45:13.872318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.819 [2024-05-15 00:45:13.872345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.819 [2024-05-15 00:45:13.881671] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.819 [2024-05-15 00:45:13.881835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.819 [2024-05-15 00:45:13.881862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.819 [2024-05-15 00:45:13.891154] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.819 [2024-05-15 00:45:13.891316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.819 [2024-05-15 00:45:13.891341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.819 [2024-05-15 00:45:13.900703] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.819 [2024-05-15 00:45:13.900865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.819 [2024-05-15 00:45:13.900892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.819 [2024-05-15 00:45:13.910187] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.819 [2024-05-15 00:45:13.910348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.819 [2024-05-15 00:45:13.910372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.819 [2024-05-15 00:45:13.919684] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.819 [2024-05-15 00:45:13.919844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.819 [2024-05-15 00:45:13.919869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.819 [2024-05-15 00:45:13.929219] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.819 [2024-05-15 00:45:13.929381] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.819 [2024-05-15 00:45:13.929406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.820 [2024-05-15 00:45:13.938713] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.820 [2024-05-15 00:45:13.938873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.820 [2024-05-15 00:45:13.938900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.820 [2024-05-15 00:45:13.948237] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.820 [2024-05-15 00:45:13.948398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.820 [2024-05-15 00:45:13.948424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.820 [2024-05-15 00:45:13.957719] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.820 [2024-05-15 00:45:13.957878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.820 [2024-05-15 00:45:13.957902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.820 [2024-05-15 00:45:13.967199] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.820 [2024-05-15 00:45:13.967358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.820 [2024-05-15 00:45:13.967382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:47.820 [2024-05-15 00:45:13.976786] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:47.820 [2024-05-15 00:45:13.976947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:47.820 [2024-05-15 00:45:13.976972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.077 [2024-05-15 00:45:13.986274] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:48.077 [2024-05-15 00:45:13.986433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.077 [2024-05-15 00:45:13.986457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.077 [2024-05-15 00:45:13.995783] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:29:48.077 
[2024-05-15 00:45:13.995943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:48.077 [2024-05-15 00:45:13.995970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:48.077 [2024-05-15 00:45:14.005282] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640
00:29:48.077 [2024-05-15 00:45:14.005442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:48.077 [2024-05-15 00:45:14.005468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:48.077
00:29:48.077 Latency(us)
00:29:48.077 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:48.077 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:48.077 nvme0n1 : 2.00 26687.94 104.25 0.00 0.00 4787.84 3759.70 11865.47
00:29:48.077 ===================================================================================================================
00:29:48.077 Total : 26687.94 104.25 0.00 0.00 4787.84 3759.70 11865.47
00:29:48.077 0
00:29:48.077 00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:48.077 | .driver_specific
00:29:48.077 | .nvme_error
00:29:48.077 | .status_code
00:29:48.077 | .command_transient_transport_error'
00:29:48.077 00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 209 > 0 ))
00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2184238
00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 2184238 ']'
00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 2184238
00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname
00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']'
00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2184238
00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1
00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']'
00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2184238'
00:29:48.077 killing process with pid 2184238
00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 2184238
00:29:48.077 Received shutdown signal, test time was about 2.000000 seconds
00:29:48.077
00:29:48.077 Latency(us)
00:29:48.077 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:48.077 ===================================================================================================================
00:29:48.077 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:48.077 00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 2184238
00:29:48.642 00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2185084
00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2185084 /var/tmp/bperf.sock
00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@828 -- # '[' -z 2185084 ']'
00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock
00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local max_retries=100
00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:48.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # xtrace_disable
00:29:48.642 00:45:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:48.642 [2024-05-15 00:45:14.612794] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization...
00:29:48.642 [2024-05-15 00:45:14.612872] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2185084 ]
00:29:48.642 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:48.642 Zero copy mechanism will not be used.
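
Before the next run starts, the trace above shows how the test decided that the previous run passed: it asks the bdevperf application for per-bdev I/O statistics over its RPC socket and pulls the transient-transport-error counter out of the NVMe error statistics (the same --nvme-error-stat option visible in the setup of the next run below), then only checks that the counter is greater than zero - here 209 such completions were recorded. A condensed sketch of that check, using the socket path, bdev name, and jq filter exactly as they appear in the trace (a sketch of what the trace shows, not the test script itself):

    # Condensed from the trace above; paths and names are taken from this log.
    SPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk

    errcount=$("${SPDK_DIR}/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')

    # The test only requires that at least one write completed with the transient
    # transport error status (00/22); 209 such completions were counted in this run.
    (( errcount > 0 )) && echo "digest-error path exercised: ${errcount} transient transport errors"
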
00:29:48.642 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.642 [2024-05-15 00:45:14.697195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.642 [2024-05-15 00:45:14.793472] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:49.209 00:45:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:49.209 00:45:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@861 -- # return 0 00:29:49.209 00:45:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:49.209 00:45:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:49.468 00:45:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:49.468 00:45:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:49.468 00:45:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:49.468 00:45:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:49.468 00:45:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:49.468 00:45:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:49.726 nvme0n1 00:29:49.726 00:45:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:49.726 00:45:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@560 -- # xtrace_disable 00:29:49.726 00:45:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:49.726 00:45:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:29:49.726 00:45:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:49.726 00:45:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:49.985 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:49.985 Zero copy mechanism will not be used. 00:29:49.985 Running I/O for 2 seconds... 
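Once the socket is up, the digest-error pass is driven entirely over RPC, as the xtrace lines above show: error statistics are enabled in the bdev_nvme layer, crc32c corruption is injected into the target's accel layer, a controller is attached with TCP data digest enabled, and the workload is started. A sketch of that sequence using the same rpc.py commands that appear in the trace; relative paths are assumed here, and the accel_error_inject_error call is sent without -s on the assumption that rpc_cmd targets the nvmf application's default RPC socket rather than bperf.sock. The final jq query is the one get_transient_errcount uses to read back the counter accumulated under --nvme-error-stat:

  # Enable per-bdev NVMe error counters; retry count -1 as in the traced run.
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1
  # Inject crc32c corruption on the target side (-o/-t/-i as traced above),
  # so the initiator sees data digest errors on completed WRITEs.
  ./scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  # Attach the NVMe-oF/TCP controller with data digest (--ddgst) enabled.
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Kick off the prepared workload, then read back the transient-error count.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  ./scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

The test then asserts that the returned count is greater than zero, which is the (( 209 > 0 )) check seen earlier in the trace.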
00:29:49.985 [2024-05-15 00:45:15.934711] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.985 [2024-05-15 00:45:15.934963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.985 [2024-05-15 00:45:15.935004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.985 [2024-05-15 00:45:15.940041] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.985 [2024-05-15 00:45:15.940270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.985 [2024-05-15 00:45:15.940310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.985 [2024-05-15 00:45:15.946370] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.985 [2024-05-15 00:45:15.946608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.985 [2024-05-15 00:45:15.946643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.985 [2024-05-15 00:45:15.950826] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.985 [2024-05-15 00:45:15.951051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.985 [2024-05-15 00:45:15.951083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.985 [2024-05-15 00:45:15.954390] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.985 [2024-05-15 00:45:15.954466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.985 [2024-05-15 00:45:15.954496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.985 [2024-05-15 00:45:15.957671] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.985 [2024-05-15 00:45:15.957882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.985 [2024-05-15 00:45:15.957910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.985 [2024-05-15 00:45:15.961350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.985 [2024-05-15 00:45:15.961545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.985 [2024-05-15 00:45:15.961582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.985 [2024-05-15 00:45:15.966186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.985 [2024-05-15 00:45:15.966280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.985 [2024-05-15 00:45:15.966311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.985 [2024-05-15 00:45:15.971963] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.985 [2024-05-15 00:45:15.972060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.985 [2024-05-15 00:45:15.972092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.985 [2024-05-15 00:45:15.977727] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.985 [2024-05-15 00:45:15.977813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.985 [2024-05-15 00:45:15.977847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.985 [2024-05-15 00:45:15.984631] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.985 [2024-05-15 00:45:15.984694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.985 [2024-05-15 00:45:15.984726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.985 [2024-05-15 00:45:15.989526] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.985 [2024-05-15 00:45:15.989595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.985 [2024-05-15 00:45:15.989626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.985 [2024-05-15 00:45:15.992853] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.985 [2024-05-15 00:45:15.992912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.985 [2024-05-15 00:45:15.992938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.985 [2024-05-15 00:45:15.996068] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.985 [2024-05-15 00:45:15.996130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.985 [2024-05-15 
00:45:15.996161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.985 [2024-05-15 00:45:15.999227] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.985 [2024-05-15 00:45:15.999283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.985 [2024-05-15 00:45:15.999310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.985 [2024-05-15 00:45:16.002365] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.985 [2024-05-15 00:45:16.002420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.985 [2024-05-15 00:45:16.002446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.985 [2024-05-15 00:45:16.005532] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.985 [2024-05-15 00:45:16.005595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.985 [2024-05-15 00:45:16.005625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.985 [2024-05-15 00:45:16.008746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.985 [2024-05-15 00:45:16.008833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.985 [2024-05-15 00:45:16.008861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.985 [2024-05-15 00:45:16.012725] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.985 [2024-05-15 00:45:16.012804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.985 [2024-05-15 00:45:16.012841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.985 [2024-05-15 00:45:16.015929] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.985 [2024-05-15 00:45:16.015984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.986 [2024-05-15 00:45:16.016010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.986 [2024-05-15 00:45:16.019453] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.986 [2024-05-15 00:45:16.019515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.986 [2024-05-15 00:45:16.019543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.986 [2024-05-15 00:45:16.022887] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.986 [2024-05-15 00:45:16.022945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.986 [2024-05-15 00:45:16.022974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.986 [2024-05-15 00:45:16.026122] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.986 [2024-05-15 00:45:16.026177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.986 [2024-05-15 00:45:16.026205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.986 [2024-05-15 00:45:16.029217] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.986 [2024-05-15 00:45:16.029272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.986 [2024-05-15 00:45:16.029309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.986 [2024-05-15 00:45:16.032356] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.986 [2024-05-15 00:45:16.032409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.986 [2024-05-15 00:45:16.032437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.986 [2024-05-15 00:45:16.035506] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.986 [2024-05-15 00:45:16.035572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.986 [2024-05-15 00:45:16.035598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.986 [2024-05-15 00:45:16.038679] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.986 [2024-05-15 00:45:16.038734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.986 [2024-05-15 00:45:16.038758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.986 [2024-05-15 00:45:16.041836] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.986 [2024-05-15 00:45:16.041892] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.986 [2024-05-15 00:45:16.041917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.986 [2024-05-15 00:45:16.044999] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.986 [2024-05-15 00:45:16.045059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.986 [2024-05-15 00:45:16.045088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.986 [2024-05-15 00:45:16.048604] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.986 [2024-05-15 00:45:16.048685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.986 [2024-05-15 00:45:16.048713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.986 [2024-05-15 00:45:16.053615] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.986 [2024-05-15 00:45:16.053759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.986 [2024-05-15 00:45:16.053786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.986 [2024-05-15 00:45:16.059559] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.986 [2024-05-15 00:45:16.059674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.986 [2024-05-15 00:45:16.059704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.986 [2024-05-15 00:45:16.065457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.986 [2024-05-15 00:45:16.065599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.986 [2024-05-15 00:45:16.065630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.986 [2024-05-15 00:45:16.072170] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.986 [2024-05-15 00:45:16.072328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.986 [2024-05-15 00:45:16.072359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.986 [2024-05-15 00:45:16.078045] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.986 [2024-05-15 00:45:16.078140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.986 [2024-05-15 00:45:16.078167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.986 [2024-05-15 00:45:16.083917] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.986 [2024-05-15 00:45:16.084004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.986 [2024-05-15 00:45:16.084041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.986 [2024-05-15 00:45:16.091373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.986 [2024-05-15 00:45:16.091469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.986 [2024-05-15 00:45:16.091496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.986 [2024-05-15 00:45:16.097335] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.986 [2024-05-15 00:45:16.097427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.986 [2024-05-15 00:45:16.097451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.986 [2024-05-15 00:45:16.103282] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.986 [2024-05-15 00:45:16.103434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.986 [2024-05-15 00:45:16.103460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.986 [2024-05-15 00:45:16.109835] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.986 [2024-05-15 00:45:16.109953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.986 [2024-05-15 00:45:16.109985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.986 [2024-05-15 00:45:16.117274] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.986 [2024-05-15 00:45:16.117392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.986 [2024-05-15 00:45:16.117421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:49.986 [2024-05-15 
00:45:16.124991] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.986 [2024-05-15 00:45:16.125124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.986 [2024-05-15 00:45:16.125152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:49.986 [2024-05-15 00:45:16.131276] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.986 [2024-05-15 00:45:16.131363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.986 [2024-05-15 00:45:16.131395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:49.986 [2024-05-15 00:45:16.137251] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.986 [2024-05-15 00:45:16.137403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.986 [2024-05-15 00:45:16.137427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:49.986 [2024-05-15 00:45:16.143181] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:49.986 [2024-05-15 00:45:16.143261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:49.986 [2024-05-15 00:45:16.143295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.245 [2024-05-15 00:45:16.149185] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.246 [2024-05-15 00:45:16.149341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.246 [2024-05-15 00:45:16.149369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.246 [2024-05-15 00:45:16.155091] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.246 [2024-05-15 00:45:16.155187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.246 [2024-05-15 00:45:16.155213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.246 [2024-05-15 00:45:16.160117] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.246 [2024-05-15 00:45:16.160190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.246 [2024-05-15 00:45:16.160216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.246 [2024-05-15 00:45:16.163384] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.246 [2024-05-15 00:45:16.163444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.246 [2024-05-15 00:45:16.163472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.246 [2024-05-15 00:45:16.166604] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.246 [2024-05-15 00:45:16.166659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.246 [2024-05-15 00:45:16.166688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.246 [2024-05-15 00:45:16.169732] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.246 [2024-05-15 00:45:16.169787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.246 [2024-05-15 00:45:16.169812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.246 [2024-05-15 00:45:16.172890] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.246 [2024-05-15 00:45:16.172943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.246 [2024-05-15 00:45:16.172967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.246 [2024-05-15 00:45:16.176028] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.246 [2024-05-15 00:45:16.176081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.246 [2024-05-15 00:45:16.176109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.246 [2024-05-15 00:45:16.179183] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.246 [2024-05-15 00:45:16.179236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.246 [2024-05-15 00:45:16.179263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.246 [2024-05-15 00:45:16.182373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.246 [2024-05-15 00:45:16.182430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.246 [2024-05-15 00:45:16.182458] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.246 [2024-05-15 00:45:16.185559] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.246 [2024-05-15 00:45:16.185659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.246 [2024-05-15 00:45:16.185687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.246 [2024-05-15 00:45:16.189516] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.246 [2024-05-15 00:45:16.189592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.246 [2024-05-15 00:45:16.189619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.246 [2024-05-15 00:45:16.195368] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.246 [2024-05-15 00:45:16.195455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.246 [2024-05-15 00:45:16.195485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.246 [2024-05-15 00:45:16.200689] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.246 [2024-05-15 00:45:16.200793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.246 [2024-05-15 00:45:16.200819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.246 [2024-05-15 00:45:16.207340] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.246 [2024-05-15 00:45:16.207438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.246 [2024-05-15 00:45:16.207468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.246 [2024-05-15 00:45:16.213260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.246 [2024-05-15 00:45:16.213341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.246 [2024-05-15 00:45:16.213370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.246 [2024-05-15 00:45:16.217694] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.246 [2024-05-15 00:45:16.217755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:50.246 [2024-05-15 00:45:16.217781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.246 [2024-05-15 00:45:16.221391] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.246 [2024-05-15 00:45:16.221444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.246 [2024-05-15 00:45:16.221469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.246 [2024-05-15 00:45:16.224501] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.246 [2024-05-15 00:45:16.224572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.246 [2024-05-15 00:45:16.224596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.246 [2024-05-15 00:45:16.227673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.246 [2024-05-15 00:45:16.227742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.246 [2024-05-15 00:45:16.227771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.246 [2024-05-15 00:45:16.230867] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.246 [2024-05-15 00:45:16.230922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.246 [2024-05-15 00:45:16.230959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.246 [2024-05-15 00:45:16.234049] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.246 [2024-05-15 00:45:16.234100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.246 [2024-05-15 00:45:16.234131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.246 [2024-05-15 00:45:16.237190] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.246 [2024-05-15 00:45:16.237242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.247 [2024-05-15 00:45:16.237273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.247 [2024-05-15 00:45:16.240599] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.247 [2024-05-15 00:45:16.240688] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.247 [2024-05-15 00:45:16.240727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.247 [2024-05-15 00:45:16.244873] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.247 [2024-05-15 00:45:16.244959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.247 [2024-05-15 00:45:16.244987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.247 [2024-05-15 00:45:16.250747] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.247 [2024-05-15 00:45:16.250834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.247 [2024-05-15 00:45:16.250868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.247 [2024-05-15 00:45:16.256065] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.247 [2024-05-15 00:45:16.256144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.247 [2024-05-15 00:45:16.256176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.247 [2024-05-15 00:45:16.263336] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.247 [2024-05-15 00:45:16.263456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.247 [2024-05-15 00:45:16.263486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.247 [2024-05-15 00:45:16.268744] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.247 [2024-05-15 00:45:16.268807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.247 [2024-05-15 00:45:16.268836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.247 [2024-05-15 00:45:16.272830] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.247 [2024-05-15 00:45:16.272907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.247 [2024-05-15 00:45:16.272937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.247 [2024-05-15 00:45:16.276072] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:29:50.247 [2024-05-15 00:45:16.276132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.247 [2024-05-15 00:45:16.276159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.247 [2024-05-15 00:45:16.279280] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.247 [2024-05-15 00:45:16.279343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.247 [2024-05-15 00:45:16.279374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.247 [2024-05-15 00:45:16.282478] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.247 [2024-05-15 00:45:16.282538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.247 [2024-05-15 00:45:16.282580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.247 [2024-05-15 00:45:16.285679] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.247 [2024-05-15 00:45:16.285735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.247 [2024-05-15 00:45:16.285768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.247 [2024-05-15 00:45:16.288874] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.247 [2024-05-15 00:45:16.288933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.247 [2024-05-15 00:45:16.288959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.247 [2024-05-15 00:45:16.292054] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.247 [2024-05-15 00:45:16.292108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.247 [2024-05-15 00:45:16.292144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.247 [2024-05-15 00:45:16.295280] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.247 [2024-05-15 00:45:16.295349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.247 [2024-05-15 00:45:16.295376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.247 [2024-05-15 00:45:16.298478] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.247 [2024-05-15 00:45:16.298531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.247 [2024-05-15 00:45:16.298573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.247 [2024-05-15 00:45:16.301712] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.247 [2024-05-15 00:45:16.301778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.247 [2024-05-15 00:45:16.301807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.247 [2024-05-15 00:45:16.304892] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.247 [2024-05-15 00:45:16.304944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.247 [2024-05-15 00:45:16.304980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.247 [2024-05-15 00:45:16.308047] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.247 [2024-05-15 00:45:16.308117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.247 [2024-05-15 00:45:16.308154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.247 [2024-05-15 00:45:16.311266] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.247 [2024-05-15 00:45:16.311323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.247 [2024-05-15 00:45:16.311361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.247 [2024-05-15 00:45:16.314450] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.247 [2024-05-15 00:45:16.314509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.247 [2024-05-15 00:45:16.314539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.247 [2024-05-15 00:45:16.317646] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.247 [2024-05-15 00:45:16.317698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.247 [2024-05-15 00:45:16.317726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.247 [2024-05-15 00:45:16.320802] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.247 [2024-05-15 00:45:16.320856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.247 [2024-05-15 00:45:16.320882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.247 [2024-05-15 00:45:16.323964] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.247 [2024-05-15 00:45:16.324019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.247 [2024-05-15 00:45:16.324042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.247 [2024-05-15 00:45:16.327124] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.247 [2024-05-15 00:45:16.327177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.247 [2024-05-15 00:45:16.327201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.247 [2024-05-15 00:45:16.330249] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.247 [2024-05-15 00:45:16.330312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.248 [2024-05-15 00:45:16.330338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.248 [2024-05-15 00:45:16.333641] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.248 [2024-05-15 00:45:16.333710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.248 [2024-05-15 00:45:16.333738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.248 [2024-05-15 00:45:16.337776] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.248 [2024-05-15 00:45:16.337876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.248 [2024-05-15 00:45:16.337900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.248 [2024-05-15 00:45:16.343661] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.248 [2024-05-15 00:45:16.343756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.248 [2024-05-15 00:45:16.343789] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.248 [2024-05-15 00:45:16.348303] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.248 [2024-05-15 00:45:16.348377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.248 [2024-05-15 00:45:16.348406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.248 [2024-05-15 00:45:16.352773] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.248 [2024-05-15 00:45:16.352841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.248 [2024-05-15 00:45:16.352867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.248 [2024-05-15 00:45:16.357876] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.248 [2024-05-15 00:45:16.357946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.248 [2024-05-15 00:45:16.357971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.248 [2024-05-15 00:45:16.364085] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.248 [2024-05-15 00:45:16.364150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.248 [2024-05-15 00:45:16.364179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.248 [2024-05-15 00:45:16.369205] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.248 [2024-05-15 00:45:16.369272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.248 [2024-05-15 00:45:16.369300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.248 [2024-05-15 00:45:16.373994] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.248 [2024-05-15 00:45:16.374045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.248 [2024-05-15 00:45:16.374073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.248 [2024-05-15 00:45:16.378887] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.248 [2024-05-15 00:45:16.378945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:50.248 [2024-05-15 00:45:16.378972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.248 [2024-05-15 00:45:16.383741] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.248 [2024-05-15 00:45:16.383852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.248 [2024-05-15 00:45:16.383880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.248 [2024-05-15 00:45:16.388223] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.248 [2024-05-15 00:45:16.388277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.248 [2024-05-15 00:45:16.388305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.248 [2024-05-15 00:45:16.392672] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.248 [2024-05-15 00:45:16.392725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.248 [2024-05-15 00:45:16.392751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.248 [2024-05-15 00:45:16.397511] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.248 [2024-05-15 00:45:16.397571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.248 [2024-05-15 00:45:16.397595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.248 [2024-05-15 00:45:16.402363] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.248 [2024-05-15 00:45:16.402432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.248 [2024-05-15 00:45:16.402466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.507 [2024-05-15 00:45:16.408403] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.507 [2024-05-15 00:45:16.408519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.507 [2024-05-15 00:45:16.408546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.507 [2024-05-15 00:45:16.413901] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.507 [2024-05-15 00:45:16.413963] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.507 [2024-05-15 00:45:16.413988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.507 [2024-05-15 00:45:16.419061] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.507 [2024-05-15 00:45:16.419115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.507 [2024-05-15 00:45:16.419142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.507 [2024-05-15 00:45:16.423204] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.507 [2024-05-15 00:45:16.423264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.507 [2024-05-15 00:45:16.423292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.507 [2024-05-15 00:45:16.426652] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.507 [2024-05-15 00:45:16.426703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.507 [2024-05-15 00:45:16.426737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.507 [2024-05-15 00:45:16.430294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.507 [2024-05-15 00:45:16.430359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.507 [2024-05-15 00:45:16.430385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.507 [2024-05-15 00:45:16.434001] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.507 [2024-05-15 00:45:16.434068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.507 [2024-05-15 00:45:16.434096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.507 [2024-05-15 00:45:16.437688] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.507 [2024-05-15 00:45:16.437739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.507 [2024-05-15 00:45:16.437773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.507 [2024-05-15 00:45:16.441212] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:29:50.507 [2024-05-15 00:45:16.441264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.507 [2024-05-15 00:45:16.441294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.507 [2024-05-15 00:45:16.444373] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.507 [2024-05-15 00:45:16.444425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.507 [2024-05-15 00:45:16.444449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.507 [2024-05-15 00:45:16.447519] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.507 [2024-05-15 00:45:16.447575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.507 [2024-05-15 00:45:16.447602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.507 [2024-05-15 00:45:16.450678] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.507 [2024-05-15 00:45:16.450732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.507 [2024-05-15 00:45:16.450759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.507 [2024-05-15 00:45:16.454093] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.507 [2024-05-15 00:45:16.454147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.507 [2024-05-15 00:45:16.454182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.507 [2024-05-15 00:45:16.457573] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.507 [2024-05-15 00:45:16.457628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.507 [2024-05-15 00:45:16.457652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.507 [2024-05-15 00:45:16.460732] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.507 [2024-05-15 00:45:16.460784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.507 [2024-05-15 00:45:16.460808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.507 [2024-05-15 00:45:16.463865] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.507 [2024-05-15 00:45:16.463919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.507 [2024-05-15 00:45:16.463947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.507 [2024-05-15 00:45:16.466992] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.507 [2024-05-15 00:45:16.467046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.507 [2024-05-15 00:45:16.467071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.507 [2024-05-15 00:45:16.470168] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.507 [2024-05-15 00:45:16.470220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.507 [2024-05-15 00:45:16.470243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.507 [2024-05-15 00:45:16.473297] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.507 [2024-05-15 00:45:16.473350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.507 [2024-05-15 00:45:16.473373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.507 [2024-05-15 00:45:16.476470] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.507 [2024-05-15 00:45:16.476523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.507 [2024-05-15 00:45:16.476548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.507 [2024-05-15 00:45:16.479683] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.507 [2024-05-15 00:45:16.479738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.507 [2024-05-15 00:45:16.479765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.507 [2024-05-15 00:45:16.482823] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.507 [2024-05-15 00:45:16.482877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.507 [2024-05-15 00:45:16.482911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.507 [2024-05-15 00:45:16.486002] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.507 [2024-05-15 00:45:16.486067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.507 [2024-05-15 00:45:16.486091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.507 [2024-05-15 00:45:16.489171] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.507 [2024-05-15 00:45:16.489231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.507 [2024-05-15 00:45:16.489256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.508 [2024-05-15 00:45:16.492321] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.508 [2024-05-15 00:45:16.492373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.508 [2024-05-15 00:45:16.492397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.508 [2024-05-15 00:45:16.495478] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.508 [2024-05-15 00:45:16.495532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.508 [2024-05-15 00:45:16.495559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.508 [2024-05-15 00:45:16.498614] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.508 [2024-05-15 00:45:16.498666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.508 [2024-05-15 00:45:16.498691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.508 [2024-05-15 00:45:16.501810] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.508 [2024-05-15 00:45:16.501864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.508 [2024-05-15 00:45:16.501891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.508 [2024-05-15 00:45:16.505521] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.508 [2024-05-15 00:45:16.505607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.508 [2024-05-15 00:45:16.505633] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.508 [2024-05-15 00:45:16.510481] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.508 [2024-05-15 00:45:16.510582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.508 [2024-05-15 00:45:16.510609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.508 [2024-05-15 00:45:16.516053] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.508 [2024-05-15 00:45:16.516141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.508 [2024-05-15 00:45:16.516168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.508 [2024-05-15 00:45:16.520346] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.508 [2024-05-15 00:45:16.520438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.508 [2024-05-15 00:45:16.520464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.508 [2024-05-15 00:45:16.525594] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.508 [2024-05-15 00:45:16.525700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.508 [2024-05-15 00:45:16.525730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.508 [2024-05-15 00:45:16.530413] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.508 [2024-05-15 00:45:16.530477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.508 [2024-05-15 00:45:16.530506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.508 [2024-05-15 00:45:16.534706] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.508 [2024-05-15 00:45:16.534764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.508 [2024-05-15 00:45:16.534792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.508 [2024-05-15 00:45:16.538350] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.508 [2024-05-15 00:45:16.538406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:50.508 [2024-05-15 00:45:16.538431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.508 [2024-05-15 00:45:16.542859] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.508 [2024-05-15 00:45:16.542920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.508 [2024-05-15 00:45:16.542945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.508 [2024-05-15 00:45:16.547007] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.508 [2024-05-15 00:45:16.547120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.508 [2024-05-15 00:45:16.547145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.508 [2024-05-15 00:45:16.552480] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.508 [2024-05-15 00:45:16.552588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.508 [2024-05-15 00:45:16.552621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.508 [2024-05-15 00:45:16.556988] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.508 [2024-05-15 00:45:16.557047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.508 [2024-05-15 00:45:16.557077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.508 [2024-05-15 00:45:16.560595] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.508 [2024-05-15 00:45:16.560684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.508 [2024-05-15 00:45:16.560709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.508 [2024-05-15 00:45:16.565084] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.508 [2024-05-15 00:45:16.565143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.508 [2024-05-15 00:45:16.565173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.508 [2024-05-15 00:45:16.568888] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.508 [2024-05-15 00:45:16.568960] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.508 [2024-05-15 00:45:16.568990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.508 [2024-05-15 00:45:16.572298] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.508 [2024-05-15 00:45:16.572354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.508 [2024-05-15 00:45:16.572382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.508 [2024-05-15 00:45:16.575582] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.508 [2024-05-15 00:45:16.575658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.508 [2024-05-15 00:45:16.575694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.508 [2024-05-15 00:45:16.579486] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.508 [2024-05-15 00:45:16.579639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.508 [2024-05-15 00:45:16.579668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.508 [2024-05-15 00:45:16.585374] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.509 [2024-05-15 00:45:16.585471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.509 [2024-05-15 00:45:16.585500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.509 [2024-05-15 00:45:16.590728] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.509 [2024-05-15 00:45:16.590807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.509 [2024-05-15 00:45:16.590833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.509 [2024-05-15 00:45:16.597699] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.509 [2024-05-15 00:45:16.597812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.509 [2024-05-15 00:45:16.597844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.509 [2024-05-15 00:45:16.603121] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:29:50.509 [2024-05-15 00:45:16.603189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.509 [2024-05-15 00:45:16.603221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.509 [2024-05-15 00:45:16.607386] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.509 [2024-05-15 00:45:16.607444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.509 [2024-05-15 00:45:16.607473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.509 [2024-05-15 00:45:16.610616] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.509 [2024-05-15 00:45:16.610675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.509 [2024-05-15 00:45:16.610703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.509 [2024-05-15 00:45:16.613842] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.509 [2024-05-15 00:45:16.613903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.509 [2024-05-15 00:45:16.613933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.509 [2024-05-15 00:45:16.617736] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.509 [2024-05-15 00:45:16.617812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.509 [2024-05-15 00:45:16.617840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.509 [2024-05-15 00:45:16.622297] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.509 [2024-05-15 00:45:16.622410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.509 [2024-05-15 00:45:16.622448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.509 [2024-05-15 00:45:16.628213] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.509 [2024-05-15 00:45:16.628290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.509 [2024-05-15 00:45:16.628320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.509 [2024-05-15 00:45:16.634192] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.509 [2024-05-15 00:45:16.634261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.509 [2024-05-15 00:45:16.634288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.509 [2024-05-15 00:45:16.641052] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.509 [2024-05-15 00:45:16.641205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.509 [2024-05-15 00:45:16.641234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.509 [2024-05-15 00:45:16.646414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.509 [2024-05-15 00:45:16.646493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.509 [2024-05-15 00:45:16.646533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.509 [2024-05-15 00:45:16.650023] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.509 [2024-05-15 00:45:16.650080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.509 [2024-05-15 00:45:16.650107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.509 [2024-05-15 00:45:16.653128] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.509 [2024-05-15 00:45:16.653182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.509 [2024-05-15 00:45:16.653211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.509 [2024-05-15 00:45:16.656298] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.509 [2024-05-15 00:45:16.656349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.509 [2024-05-15 00:45:16.656375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.509 [2024-05-15 00:45:16.659414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.509 [2024-05-15 00:45:16.659468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.509 [2024-05-15 00:45:16.659498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.509 [2024-05-15 00:45:16.662581] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.509 [2024-05-15 00:45:16.662634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.509 [2024-05-15 00:45:16.662660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.509 [2024-05-15 00:45:16.665767] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.509 [2024-05-15 00:45:16.665829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.509 [2024-05-15 00:45:16.665852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.768 [2024-05-15 00:45:16.669231] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.768 [2024-05-15 00:45:16.669286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.768 [2024-05-15 00:45:16.669311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.768 [2024-05-15 00:45:16.674298] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.768 [2024-05-15 00:45:16.674354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.768 [2024-05-15 00:45:16.674378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.768 [2024-05-15 00:45:16.678146] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.768 [2024-05-15 00:45:16.678200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.768 [2024-05-15 00:45:16.678230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.768 [2024-05-15 00:45:16.681564] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.768 [2024-05-15 00:45:16.681615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.768 [2024-05-15 00:45:16.681642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.768 [2024-05-15 00:45:16.685052] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.768 [2024-05-15 00:45:16.685125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.768 [2024-05-15 00:45:16.685151] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.768 [2024-05-15 00:45:16.688993] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.768 [2024-05-15 00:45:16.689064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.769 [2024-05-15 00:45:16.689088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.769 [2024-05-15 00:45:16.693884] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.769 [2024-05-15 00:45:16.693978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.769 [2024-05-15 00:45:16.694005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.769 [2024-05-15 00:45:16.699822] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.769 [2024-05-15 00:45:16.699970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.769 [2024-05-15 00:45:16.700001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.769 [2024-05-15 00:45:16.705583] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.769 [2024-05-15 00:45:16.705777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.769 [2024-05-15 00:45:16.705803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.769 [2024-05-15 00:45:16.712192] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.769 [2024-05-15 00:45:16.712363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.769 [2024-05-15 00:45:16.712393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.769 [2024-05-15 00:45:16.718417] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.769 [2024-05-15 00:45:16.718520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.769 [2024-05-15 00:45:16.718545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.769 [2024-05-15 00:45:16.726785] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.769 [2024-05-15 00:45:16.726936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:50.769 [2024-05-15 00:45:16.726975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.769 [2024-05-15 00:45:16.731359] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.769 [2024-05-15 00:45:16.731425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.769 [2024-05-15 00:45:16.731451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.769 [2024-05-15 00:45:16.734539] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.769 [2024-05-15 00:45:16.734598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.769 [2024-05-15 00:45:16.734624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.769 [2024-05-15 00:45:16.737592] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.769 [2024-05-15 00:45:16.737651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.769 [2024-05-15 00:45:16.737679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.769 [2024-05-15 00:45:16.740737] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.769 [2024-05-15 00:45:16.740813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.769 [2024-05-15 00:45:16.740840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.769 [2024-05-15 00:45:16.744322] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.769 [2024-05-15 00:45:16.744384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.769 [2024-05-15 00:45:16.744427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.769 [2024-05-15 00:45:16.748741] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.769 [2024-05-15 00:45:16.748804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.769 [2024-05-15 00:45:16.748830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.769 [2024-05-15 00:45:16.753058] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.769 [2024-05-15 00:45:16.753112] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.769 [2024-05-15 00:45:16.753140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.769 [2024-05-15 00:45:16.756624] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.769 [2024-05-15 00:45:16.756679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.769 [2024-05-15 00:45:16.756706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.769 [2024-05-15 00:45:16.760260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.769 [2024-05-15 00:45:16.760347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.769 [2024-05-15 00:45:16.760372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.769 [2024-05-15 00:45:16.765000] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.769 [2024-05-15 00:45:16.765060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.769 [2024-05-15 00:45:16.765085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.769 [2024-05-15 00:45:16.768081] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.769 [2024-05-15 00:45:16.768133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.769 [2024-05-15 00:45:16.768156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.769 [2024-05-15 00:45:16.771315] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.769 [2024-05-15 00:45:16.771384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.769 [2024-05-15 00:45:16.771413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.769 [2024-05-15 00:45:16.775046] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.769 [2024-05-15 00:45:16.775122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.769 [2024-05-15 00:45:16.775148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.769 [2024-05-15 00:45:16.780687] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:29:50.769 [2024-05-15 00:45:16.780758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.769 [2024-05-15 00:45:16.780787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.769 [2024-05-15 00:45:16.786469] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.769 [2024-05-15 00:45:16.786540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.769 [2024-05-15 00:45:16.786576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.769 [2024-05-15 00:45:16.794082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.769 [2024-05-15 00:45:16.794170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.769 [2024-05-15 00:45:16.794197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.769 [2024-05-15 00:45:16.799388] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.769 [2024-05-15 00:45:16.799465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.769 [2024-05-15 00:45:16.799491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.769 [2024-05-15 00:45:16.803570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.769 [2024-05-15 00:45:16.803633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.769 [2024-05-15 00:45:16.803658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.769 [2024-05-15 00:45:16.806738] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.769 [2024-05-15 00:45:16.806796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.769 [2024-05-15 00:45:16.806824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.769 [2024-05-15 00:45:16.809853] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.769 [2024-05-15 00:45:16.809908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.769 [2024-05-15 00:45:16.809935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.769 [2024-05-15 00:45:16.812917] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.769 [2024-05-15 00:45:16.812969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.769 [2024-05-15 00:45:16.812994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.769 [2024-05-15 00:45:16.815947] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.769 [2024-05-15 00:45:16.815997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.770 [2024-05-15 00:45:16.816044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.770 [2024-05-15 00:45:16.819041] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.770 [2024-05-15 00:45:16.819095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.770 [2024-05-15 00:45:16.819120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.770 [2024-05-15 00:45:16.822870] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.770 [2024-05-15 00:45:16.822991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.770 [2024-05-15 00:45:16.823019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.770 [2024-05-15 00:45:16.827627] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.770 [2024-05-15 00:45:16.827778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.770 [2024-05-15 00:45:16.827806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.770 [2024-05-15 00:45:16.833529] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.770 [2024-05-15 00:45:16.833680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.770 [2024-05-15 00:45:16.833710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.770 [2024-05-15 00:45:16.839721] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.770 [2024-05-15 00:45:16.839830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.770 [2024-05-15 00:45:16.839857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.770 [2024-05-15 00:45:16.848066] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.770 [2024-05-15 00:45:16.848183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.770 [2024-05-15 00:45:16.848213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.770 [2024-05-15 00:45:16.854120] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.770 [2024-05-15 00:45:16.854183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.770 [2024-05-15 00:45:16.854209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.770 [2024-05-15 00:45:16.858923] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.770 [2024-05-15 00:45:16.858981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.770 [2024-05-15 00:45:16.859005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.770 [2024-05-15 00:45:16.863720] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.770 [2024-05-15 00:45:16.863794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.770 [2024-05-15 00:45:16.863829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.770 [2024-05-15 00:45:16.868628] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.770 [2024-05-15 00:45:16.868684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.770 [2024-05-15 00:45:16.868710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.770 [2024-05-15 00:45:16.873511] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.770 [2024-05-15 00:45:16.873619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.770 [2024-05-15 00:45:16.873648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.770 [2024-05-15 00:45:16.878386] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.770 [2024-05-15 00:45:16.878437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.770 [2024-05-15 00:45:16.878463] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.770 [2024-05-15 00:45:16.883465] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.770 [2024-05-15 00:45:16.883523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.770 [2024-05-15 00:45:16.883561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.770 [2024-05-15 00:45:16.888333] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.770 [2024-05-15 00:45:16.888393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.770 [2024-05-15 00:45:16.888426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.770 [2024-05-15 00:45:16.893125] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.770 [2024-05-15 00:45:16.893193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.770 [2024-05-15 00:45:16.893221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.770 [2024-05-15 00:45:16.897858] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.770 [2024-05-15 00:45:16.897941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.770 [2024-05-15 00:45:16.897972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.770 [2024-05-15 00:45:16.902894] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.770 [2024-05-15 00:45:16.902947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.770 [2024-05-15 00:45:16.902979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.770 [2024-05-15 00:45:16.906629] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.770 [2024-05-15 00:45:16.906684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.770 [2024-05-15 00:45:16.906711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.770 [2024-05-15 00:45:16.909794] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.770 [2024-05-15 00:45:16.909847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:50.770 [2024-05-15 00:45:16.909875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.770 [2024-05-15 00:45:16.913248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.770 [2024-05-15 00:45:16.913313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.770 [2024-05-15 00:45:16.913341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:50.770 [2024-05-15 00:45:16.916460] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.770 [2024-05-15 00:45:16.916513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.770 [2024-05-15 00:45:16.916540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:50.770 [2024-05-15 00:45:16.919864] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.770 [2024-05-15 00:45:16.919938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.770 [2024-05-15 00:45:16.919962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:50.770 [2024-05-15 00:45:16.923704] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.770 [2024-05-15 00:45:16.923757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.770 [2024-05-15 00:45:16.923791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:50.770 [2024-05-15 00:45:16.927984] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:50.770 [2024-05-15 00:45:16.928055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:50.770 [2024-05-15 00:45:16.928083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.041 [2024-05-15 00:45:16.932989] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.041 [2024-05-15 00:45:16.933100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.041 [2024-05-15 00:45:16.933133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.041 [2024-05-15 00:45:16.938679] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.041 [2024-05-15 00:45:16.938758] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.041 [2024-05-15 00:45:16.938783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.041 [2024-05-15 00:45:16.944464] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.041 [2024-05-15 00:45:16.944544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.041 [2024-05-15 00:45:16.944575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.041 [2024-05-15 00:45:16.951573] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.041 [2024-05-15 00:45:16.951692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.041 [2024-05-15 00:45:16.951722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.041 [2024-05-15 00:45:16.956328] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.041 [2024-05-15 00:45:16.956389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.041 [2024-05-15 00:45:16.956416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.041 [2024-05-15 00:45:16.959673] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.041 [2024-05-15 00:45:16.959729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.041 [2024-05-15 00:45:16.959771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.041 [2024-05-15 00:45:16.962806] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.041 [2024-05-15 00:45:16.962884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.041 [2024-05-15 00:45:16.962918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.041 [2024-05-15 00:45:16.965916] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.041 [2024-05-15 00:45:16.965970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.041 [2024-05-15 00:45:16.965995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.041 [2024-05-15 00:45:16.969077] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:29:51.041 [2024-05-15 00:45:16.969133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.041 [2024-05-15 00:45:16.969166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.041 [2024-05-15 00:45:16.972777] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.041 [2024-05-15 00:45:16.972844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.041 [2024-05-15 00:45:16.972883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.041 [2024-05-15 00:45:16.976946] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.041 [2024-05-15 00:45:16.977013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.041 [2024-05-15 00:45:16.977049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.041 [2024-05-15 00:45:16.981560] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.041 [2024-05-15 00:45:16.981612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.041 [2024-05-15 00:45:16.981638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.041 [2024-05-15 00:45:16.985116] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.041 [2024-05-15 00:45:16.985174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.041 [2024-05-15 00:45:16.985202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.041 [2024-05-15 00:45:16.988558] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.041 [2024-05-15 00:45:16.988621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.041 [2024-05-15 00:45:16.988646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.041 [2024-05-15 00:45:16.991901] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.041 [2024-05-15 00:45:16.991958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.041 [2024-05-15 00:45:16.992001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.041 [2024-05-15 00:45:16.995356] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.041 [2024-05-15 00:45:16.995413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.041 [2024-05-15 00:45:16.995439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.041 [2024-05-15 00:45:16.998577] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.041 [2024-05-15 00:45:16.998638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.041 [2024-05-15 00:45:16.998670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.041 [2024-05-15 00:45:17.001969] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.041 [2024-05-15 00:45:17.002045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.041 [2024-05-15 00:45:17.002073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.041 [2024-05-15 00:45:17.006506] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.041 [2024-05-15 00:45:17.006593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.041 [2024-05-15 00:45:17.006622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.041 [2024-05-15 00:45:17.012413] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.041 [2024-05-15 00:45:17.012571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.041 [2024-05-15 00:45:17.012599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.041 [2024-05-15 00:45:17.017967] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.041 [2024-05-15 00:45:17.018100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.041 [2024-05-15 00:45:17.018129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.041 [2024-05-15 00:45:17.024445] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.041 [2024-05-15 00:45:17.024584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.041 [2024-05-15 00:45:17.024616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.041 [2024-05-15 00:45:17.030634] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.041 [2024-05-15 00:45:17.030741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.042 [2024-05-15 00:45:17.030767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.042 [2024-05-15 00:45:17.036676] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.042 [2024-05-15 00:45:17.036744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.042 [2024-05-15 00:45:17.036769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.042 [2024-05-15 00:45:17.042945] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.042 [2024-05-15 00:45:17.043010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.042 [2024-05-15 00:45:17.043036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.042 [2024-05-15 00:45:17.049133] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.042 [2024-05-15 00:45:17.049198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.042 [2024-05-15 00:45:17.049227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.042 [2024-05-15 00:45:17.055260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.042 [2024-05-15 00:45:17.055328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.042 [2024-05-15 00:45:17.055352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.042 [2024-05-15 00:45:17.061260] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.042 [2024-05-15 00:45:17.061322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.042 [2024-05-15 00:45:17.061349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.042 [2024-05-15 00:45:17.067425] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.042 [2024-05-15 00:45:17.067485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.042 [2024-05-15 00:45:17.067510] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.042 [2024-05-15 00:45:17.073441] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.042 [2024-05-15 00:45:17.073508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.042 [2024-05-15 00:45:17.073532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.042 [2024-05-15 00:45:17.079529] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.042 [2024-05-15 00:45:17.079641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.042 [2024-05-15 00:45:17.079666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.042 [2024-05-15 00:45:17.085182] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.042 [2024-05-15 00:45:17.085246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.042 [2024-05-15 00:45:17.085274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.042 [2024-05-15 00:45:17.089597] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.042 [2024-05-15 00:45:17.089660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.042 [2024-05-15 00:45:17.089685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.042 [2024-05-15 00:45:17.093060] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.042 [2024-05-15 00:45:17.093120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.042 [2024-05-15 00:45:17.093144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.042 [2024-05-15 00:45:17.096319] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.042 [2024-05-15 00:45:17.096371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.042 [2024-05-15 00:45:17.096394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.042 [2024-05-15 00:45:17.099410] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.042 [2024-05-15 00:45:17.099474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:51.042 [2024-05-15 00:45:17.099496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.042 [2024-05-15 00:45:17.102579] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.042 [2024-05-15 00:45:17.102630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.042 [2024-05-15 00:45:17.102658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.042 [2024-05-15 00:45:17.105839] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.042 [2024-05-15 00:45:17.105899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.042 [2024-05-15 00:45:17.105926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.042 [2024-05-15 00:45:17.110254] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.042 [2024-05-15 00:45:17.110326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.042 [2024-05-15 00:45:17.110351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.042 [2024-05-15 00:45:17.114271] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.042 [2024-05-15 00:45:17.114362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.042 [2024-05-15 00:45:17.114386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.042 [2024-05-15 00:45:17.119822] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.042 [2024-05-15 00:45:17.119876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.042 [2024-05-15 00:45:17.119902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.042 [2024-05-15 00:45:17.123973] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.042 [2024-05-15 00:45:17.124026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.042 [2024-05-15 00:45:17.124052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.042 [2024-05-15 00:45:17.127482] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.042 [2024-05-15 00:45:17.127539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.042 [2024-05-15 00:45:17.127576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.042 [2024-05-15 00:45:17.130950] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.042 [2024-05-15 00:45:17.130998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.042 [2024-05-15 00:45:17.131024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.042 [2024-05-15 00:45:17.134249] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.042 [2024-05-15 00:45:17.134301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.042 [2024-05-15 00:45:17.134327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.042 [2024-05-15 00:45:17.137641] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.042 [2024-05-15 00:45:17.137693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.042 [2024-05-15 00:45:17.137717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.042 [2024-05-15 00:45:17.140756] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.042 [2024-05-15 00:45:17.140808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.042 [2024-05-15 00:45:17.140835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.042 [2024-05-15 00:45:17.143872] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.042 [2024-05-15 00:45:17.143922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.042 [2024-05-15 00:45:17.143946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.042 [2024-05-15 00:45:17.147002] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.042 [2024-05-15 00:45:17.147064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.042 [2024-05-15 00:45:17.147098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.042 [2024-05-15 00:45:17.151018] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000195fef90 00:29:51.042 [2024-05-15 00:45:17.151131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.042 [2024-05-15 00:45:17.151155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.042 [2024-05-15 00:45:17.155158] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.043 [2024-05-15 00:45:17.155293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.043 [2024-05-15 00:45:17.155317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.043 [2024-05-15 00:45:17.159546] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.043 [2024-05-15 00:45:17.159606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.043 [2024-05-15 00:45:17.159634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.043 [2024-05-15 00:45:17.165187] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.043 [2024-05-15 00:45:17.165299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.043 [2024-05-15 00:45:17.165331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.043 [2024-05-15 00:45:17.169089] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.043 [2024-05-15 00:45:17.169158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.043 [2024-05-15 00:45:17.169183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.043 [2024-05-15 00:45:17.172775] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.043 [2024-05-15 00:45:17.172834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.043 [2024-05-15 00:45:17.172859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.043 [2024-05-15 00:45:17.176309] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.043 [2024-05-15 00:45:17.176361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.043 [2024-05-15 00:45:17.176388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.043 [2024-05-15 00:45:17.179411] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.043 [2024-05-15 00:45:17.179463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.043 [2024-05-15 00:45:17.179488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.043 [2024-05-15 00:45:17.182543] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.043 [2024-05-15 00:45:17.182603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.043 [2024-05-15 00:45:17.182629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.043 [2024-05-15 00:45:17.185686] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.043 [2024-05-15 00:45:17.185741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.043 [2024-05-15 00:45:17.185774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.043 [2024-05-15 00:45:17.189152] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.043 [2024-05-15 00:45:17.189214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.043 [2024-05-15 00:45:17.189238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.043 [2024-05-15 00:45:17.193247] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.043 [2024-05-15 00:45:17.193307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.043 [2024-05-15 00:45:17.193332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.356 [2024-05-15 00:45:17.198203] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.356 [2024-05-15 00:45:17.198284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.356 [2024-05-15 00:45:17.198312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.356 [2024-05-15 00:45:17.203297] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.356 [2024-05-15 00:45:17.203367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.356 [2024-05-15 00:45:17.203399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.356 [2024-05-15 00:45:17.208981] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.356 [2024-05-15 00:45:17.209057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.356 [2024-05-15 00:45:17.209091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.356 [2024-05-15 00:45:17.212888] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.356 [2024-05-15 00:45:17.212949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.356 [2024-05-15 00:45:17.212979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.356 [2024-05-15 00:45:17.216038] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.356 [2024-05-15 00:45:17.216105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.356 [2024-05-15 00:45:17.216129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.356 [2024-05-15 00:45:17.219139] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.356 [2024-05-15 00:45:17.219208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.356 [2024-05-15 00:45:17.219230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.356 [2024-05-15 00:45:17.222243] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.356 [2024-05-15 00:45:17.222307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.356 [2024-05-15 00:45:17.222330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.356 [2024-05-15 00:45:17.225333] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.356 [2024-05-15 00:45:17.225402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.356 [2024-05-15 00:45:17.225426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.356 [2024-05-15 00:45:17.228387] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.356 [2024-05-15 00:45:17.228453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.356 [2024-05-15 00:45:17.228482] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.356 [2024-05-15 00:45:17.231724] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.356 [2024-05-15 00:45:17.231793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.356 [2024-05-15 00:45:17.231819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.356 [2024-05-15 00:45:17.236017] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.357 [2024-05-15 00:45:17.236100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.357 [2024-05-15 00:45:17.236125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.357 [2024-05-15 00:45:17.241896] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.357 [2024-05-15 00:45:17.241998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.357 [2024-05-15 00:45:17.242022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.357 [2024-05-15 00:45:17.247157] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.357 [2024-05-15 00:45:17.247265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.357 [2024-05-15 00:45:17.247296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.357 [2024-05-15 00:45:17.254248] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.357 [2024-05-15 00:45:17.254345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.357 [2024-05-15 00:45:17.254387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.357 [2024-05-15 00:45:17.259542] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.357 [2024-05-15 00:45:17.259618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.357 [2024-05-15 00:45:17.259646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.357 [2024-05-15 00:45:17.263376] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.357 [2024-05-15 00:45:17.263439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:51.357 [2024-05-15 00:45:17.263466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.357 [2024-05-15 00:45:17.266873] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.357 [2024-05-15 00:45:17.266932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.357 [2024-05-15 00:45:17.266962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.357 [2024-05-15 00:45:17.270431] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.357 [2024-05-15 00:45:17.270489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.357 [2024-05-15 00:45:17.270516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.357 [2024-05-15 00:45:17.273930] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.357 [2024-05-15 00:45:17.273987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.357 [2024-05-15 00:45:17.274013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.357 [2024-05-15 00:45:17.277221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.357 [2024-05-15 00:45:17.277284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.357 [2024-05-15 00:45:17.277307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.357 [2024-05-15 00:45:17.280339] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.357 [2024-05-15 00:45:17.280391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.357 [2024-05-15 00:45:17.280414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.357 [2024-05-15 00:45:17.283455] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.357 [2024-05-15 00:45:17.283518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.357 [2024-05-15 00:45:17.283545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.357 [2024-05-15 00:45:17.286593] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.357 [2024-05-15 00:45:17.286657] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.357 [2024-05-15 00:45:17.286683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.357 [2024-05-15 00:45:17.290061] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.357 [2024-05-15 00:45:17.290116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.357 [2024-05-15 00:45:17.290145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.357 [2024-05-15 00:45:17.294346] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.357 [2024-05-15 00:45:17.294400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.357 [2024-05-15 00:45:17.294428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.357 [2024-05-15 00:45:17.298950] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.357 [2024-05-15 00:45:17.299008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.357 [2024-05-15 00:45:17.299044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.357 [2024-05-15 00:45:17.302387] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.357 [2024-05-15 00:45:17.302461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.357 [2024-05-15 00:45:17.302486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.357 [2024-05-15 00:45:17.305771] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.357 [2024-05-15 00:45:17.305823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.357 [2024-05-15 00:45:17.305855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.357 [2024-05-15 00:45:17.309273] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.357 [2024-05-15 00:45:17.309332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.357 [2024-05-15 00:45:17.309359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.357 [2024-05-15 00:45:17.312738] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:29:51.357 [2024-05-15 00:45:17.312795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.357 [2024-05-15 00:45:17.312819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.357 [2024-05-15 00:45:17.316186] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.357 [2024-05-15 00:45:17.316244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.357 [2024-05-15 00:45:17.316269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.357 [2024-05-15 00:45:17.320857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.357 [2024-05-15 00:45:17.320910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.357 [2024-05-15 00:45:17.320938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.357 [2024-05-15 00:45:17.324101] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.357 [2024-05-15 00:45:17.324153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.357 [2024-05-15 00:45:17.324178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.357 [2024-05-15 00:45:17.327302] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.357 [2024-05-15 00:45:17.327414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.358 [2024-05-15 00:45:17.327441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.358 [2024-05-15 00:45:17.331045] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.358 [2024-05-15 00:45:17.331115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.358 [2024-05-15 00:45:17.331146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.358 [2024-05-15 00:45:17.336746] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.358 [2024-05-15 00:45:17.336926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.358 [2024-05-15 00:45:17.336952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.358 [2024-05-15 00:45:17.342110] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.358 [2024-05-15 00:45:17.342192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.358 [2024-05-15 00:45:17.342219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.358 [2024-05-15 00:45:17.348750] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.358 [2024-05-15 00:45:17.348856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.358 [2024-05-15 00:45:17.348881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.358 [2024-05-15 00:45:17.354700] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.358 [2024-05-15 00:45:17.354815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.358 [2024-05-15 00:45:17.354841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.358 [2024-05-15 00:45:17.359035] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.358 [2024-05-15 00:45:17.359103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.358 [2024-05-15 00:45:17.359151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.358 [2024-05-15 00:45:17.362244] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.358 [2024-05-15 00:45:17.362306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.358 [2024-05-15 00:45:17.362329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.358 [2024-05-15 00:45:17.365366] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.358 [2024-05-15 00:45:17.365430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.358 [2024-05-15 00:45:17.365459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.358 [2024-05-15 00:45:17.368847] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.358 [2024-05-15 00:45:17.368919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.358 [2024-05-15 00:45:17.368962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.358 [2024-05-15 00:45:17.372541] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.358 [2024-05-15 00:45:17.372606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.358 [2024-05-15 00:45:17.372637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.358 [2024-05-15 00:45:17.376891] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.358 [2024-05-15 00:45:17.377003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.358 [2024-05-15 00:45:17.377035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.358 [2024-05-15 00:45:17.380259] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.358 [2024-05-15 00:45:17.380319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.358 [2024-05-15 00:45:17.380357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.358 [2024-05-15 00:45:17.383663] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.358 [2024-05-15 00:45:17.383721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.358 [2024-05-15 00:45:17.383762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.358 [2024-05-15 00:45:17.387218] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.358 [2024-05-15 00:45:17.387288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.358 [2024-05-15 00:45:17.387335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.358 [2024-05-15 00:45:17.390895] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.358 [2024-05-15 00:45:17.390956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.358 [2024-05-15 00:45:17.390992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.358 [2024-05-15 00:45:17.394420] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.358 [2024-05-15 00:45:17.394532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.358 [2024-05-15 00:45:17.394586] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.358 [2024-05-15 00:45:17.399021] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.358 [2024-05-15 00:45:17.399087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.358 [2024-05-15 00:45:17.399116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.358 [2024-05-15 00:45:17.402296] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.358 [2024-05-15 00:45:17.402361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.358 [2024-05-15 00:45:17.402386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.358 [2024-05-15 00:45:17.405626] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.358 [2024-05-15 00:45:17.405688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.358 [2024-05-15 00:45:17.405715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.358 [2024-05-15 00:45:17.409287] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.358 [2024-05-15 00:45:17.409371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.358 [2024-05-15 00:45:17.409404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.358 [2024-05-15 00:45:17.414951] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.358 [2024-05-15 00:45:17.415063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.358 [2024-05-15 00:45:17.415089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.358 [2024-05-15 00:45:17.419192] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.358 [2024-05-15 00:45:17.419250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.358 [2024-05-15 00:45:17.419276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.358 [2024-05-15 00:45:17.422792] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.358 [2024-05-15 00:45:17.422854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:51.358 [2024-05-15 00:45:17.422885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.358 [2024-05-15 00:45:17.426407] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.358 [2024-05-15 00:45:17.426469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.358 [2024-05-15 00:45:17.426494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.358 [2024-05-15 00:45:17.430078] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.358 [2024-05-15 00:45:17.430132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.358 [2024-05-15 00:45:17.430156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.359 [2024-05-15 00:45:17.433692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.359 [2024-05-15 00:45:17.433752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.359 [2024-05-15 00:45:17.433784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.359 [2024-05-15 00:45:17.437206] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.359 [2024-05-15 00:45:17.437264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.359 [2024-05-15 00:45:17.437293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.359 [2024-05-15 00:45:17.440681] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.359 [2024-05-15 00:45:17.440739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.359 [2024-05-15 00:45:17.440765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.359 [2024-05-15 00:45:17.443983] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.359 [2024-05-15 00:45:17.444048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.359 [2024-05-15 00:45:17.444079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.359 [2024-05-15 00:45:17.447682] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.359 [2024-05-15 00:45:17.447750] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.359 [2024-05-15 00:45:17.447782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.359 [2024-05-15 00:45:17.450981] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.359 [2024-05-15 00:45:17.451038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.359 [2024-05-15 00:45:17.451074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.359 [2024-05-15 00:45:17.454191] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.359 [2024-05-15 00:45:17.454250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.359 [2024-05-15 00:45:17.454276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.359 [2024-05-15 00:45:17.457445] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.359 [2024-05-15 00:45:17.457498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.359 [2024-05-15 00:45:17.457525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.359 [2024-05-15 00:45:17.460839] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.359 [2024-05-15 00:45:17.460914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.359 [2024-05-15 00:45:17.460943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.359 [2024-05-15 00:45:17.464849] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.359 [2024-05-15 00:45:17.464913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.359 [2024-05-15 00:45:17.464942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.359 [2024-05-15 00:45:17.469318] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.359 [2024-05-15 00:45:17.469397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.359 [2024-05-15 00:45:17.469429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.359 [2024-05-15 00:45:17.474242] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000195fef90 00:29:51.359 [2024-05-15 00:45:17.474305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.359 [2024-05-15 00:45:17.474330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.359 [2024-05-15 00:45:17.479651] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.359 [2024-05-15 00:45:17.479737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.359 [2024-05-15 00:45:17.479761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.359 [2024-05-15 00:45:17.483764] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.359 [2024-05-15 00:45:17.483816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.359 [2024-05-15 00:45:17.483848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.359 [2024-05-15 00:45:17.486961] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.359 [2024-05-15 00:45:17.487018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.359 [2024-05-15 00:45:17.487042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.359 [2024-05-15 00:45:17.490032] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.359 [2024-05-15 00:45:17.490088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.359 [2024-05-15 00:45:17.490110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.359 [2024-05-15 00:45:17.493187] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.359 [2024-05-15 00:45:17.493240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.359 [2024-05-15 00:45:17.493267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.359 [2024-05-15 00:45:17.496378] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.359 [2024-05-15 00:45:17.496429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.359 [2024-05-15 00:45:17.496455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.359 [2024-05-15 00:45:17.499534] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.359 [2024-05-15 00:45:17.499592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.359 [2024-05-15 00:45:17.499617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.359 [2024-05-15 00:45:17.502650] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.359 [2024-05-15 00:45:17.502713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.359 [2024-05-15 00:45:17.502736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.359 [2024-05-15 00:45:17.506077] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.359 [2024-05-15 00:45:17.506147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.359 [2024-05-15 00:45:17.506170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.359 [2024-05-15 00:45:17.510692] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.359 [2024-05-15 00:45:17.510878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.359 [2024-05-15 00:45:17.510908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.622 [2024-05-15 00:45:17.516834] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.622 [2024-05-15 00:45:17.516920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.622 [2024-05-15 00:45:17.516948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.622 [2024-05-15 00:45:17.520662] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.622 [2024-05-15 00:45:17.520733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.622 [2024-05-15 00:45:17.520757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.622 [2024-05-15 00:45:17.525116] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.622 [2024-05-15 00:45:17.525196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.622 [2024-05-15 00:45:17.525222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.622 [2024-05-15 00:45:17.528502] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.622 [2024-05-15 00:45:17.528577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.622 [2024-05-15 00:45:17.528601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.622 [2024-05-15 00:45:17.531687] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.622 [2024-05-15 00:45:17.531743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.622 [2024-05-15 00:45:17.531767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.622 [2024-05-15 00:45:17.534888] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.622 [2024-05-15 00:45:17.534941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.622 [2024-05-15 00:45:17.534967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.622 [2024-05-15 00:45:17.537982] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.622 [2024-05-15 00:45:17.538037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.622 [2024-05-15 00:45:17.538067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.622 [2024-05-15 00:45:17.541059] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.622 [2024-05-15 00:45:17.541123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.622 [2024-05-15 00:45:17.541150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.622 [2024-05-15 00:45:17.544475] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.622 [2024-05-15 00:45:17.544533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.622 [2024-05-15 00:45:17.544567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.622 [2024-05-15 00:45:17.548748] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.622 [2024-05-15 00:45:17.548818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.622 [2024-05-15 00:45:17.548847] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.622 [2024-05-15 00:45:17.551940] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.622 [2024-05-15 00:45:17.551996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.622 [2024-05-15 00:45:17.552021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.622 [2024-05-15 00:45:17.555112] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.622 [2024-05-15 00:45:17.555167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.622 [2024-05-15 00:45:17.555192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.622 [2024-05-15 00:45:17.558745] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.622 [2024-05-15 00:45:17.558814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.622 [2024-05-15 00:45:17.558845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.622 [2024-05-15 00:45:17.563444] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.622 [2024-05-15 00:45:17.563541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.622 [2024-05-15 00:45:17.563572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.622 [2024-05-15 00:45:17.569082] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.622 [2024-05-15 00:45:17.569185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.622 [2024-05-15 00:45:17.569209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.622 [2024-05-15 00:45:17.572656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.622 [2024-05-15 00:45:17.572748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.622 [2024-05-15 00:45:17.572776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.623 [2024-05-15 00:45:17.576223] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.623 [2024-05-15 00:45:17.576284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:51.623 [2024-05-15 00:45:17.576309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.623 [2024-05-15 00:45:17.580102] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.623 [2024-05-15 00:45:17.580193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.623 [2024-05-15 00:45:17.580218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.623 [2024-05-15 00:45:17.583804] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.623 [2024-05-15 00:45:17.583884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.623 [2024-05-15 00:45:17.583910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.623 [2024-05-15 00:45:17.589196] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.623 [2024-05-15 00:45:17.589296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.623 [2024-05-15 00:45:17.589321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.623 [2024-05-15 00:45:17.594751] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.623 [2024-05-15 00:45:17.594860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.623 [2024-05-15 00:45:17.594887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.623 [2024-05-15 00:45:17.600942] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.623 [2024-05-15 00:45:17.601026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.623 [2024-05-15 00:45:17.601058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.623 [2024-05-15 00:45:17.607093] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.623 [2024-05-15 00:45:17.607254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.623 [2024-05-15 00:45:17.607280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.623 [2024-05-15 00:45:17.612620] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.623 [2024-05-15 00:45:17.612718] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.623 [2024-05-15 00:45:17.612750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.623 [2024-05-15 00:45:17.617221] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.623 [2024-05-15 00:45:17.617330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.623 [2024-05-15 00:45:17.617354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.623 [2024-05-15 00:45:17.622376] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.623 [2024-05-15 00:45:17.622527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.623 [2024-05-15 00:45:17.622560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.623 [2024-05-15 00:45:17.629052] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.623 [2024-05-15 00:45:17.629150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.623 [2024-05-15 00:45:17.629179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.623 [2024-05-15 00:45:17.634609] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.623 [2024-05-15 00:45:17.634708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.623 [2024-05-15 00:45:17.634733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.623 [2024-05-15 00:45:17.638304] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.623 [2024-05-15 00:45:17.638376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.623 [2024-05-15 00:45:17.638406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.623 [2024-05-15 00:45:17.641077] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.623 [2024-05-15 00:45:17.641132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.623 [2024-05-15 00:45:17.641171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.623 [2024-05-15 00:45:17.643900] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000195fef90 00:29:51.623 [2024-05-15 00:45:17.643959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.623 [2024-05-15 00:45:17.644002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.623 [2024-05-15 00:45:17.646677] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.623 [2024-05-15 00:45:17.646733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.623 [2024-05-15 00:45:17.646757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.623 [2024-05-15 00:45:17.649537] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.623 [2024-05-15 00:45:17.649600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.623 [2024-05-15 00:45:17.649624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.623 [2024-05-15 00:45:17.652757] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.623 [2024-05-15 00:45:17.652812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.623 [2024-05-15 00:45:17.652837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.623 [2024-05-15 00:45:17.656294] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.623 [2024-05-15 00:45:17.656349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.623 [2024-05-15 00:45:17.656377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.623 [2024-05-15 00:45:17.659105] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.623 [2024-05-15 00:45:17.659159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.623 [2024-05-15 00:45:17.659184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.623 [2024-05-15 00:45:17.661919] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.623 [2024-05-15 00:45:17.661985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.623 [2024-05-15 00:45:17.662008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.623 [2024-05-15 00:45:17.664769] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.623 [2024-05-15 00:45:17.664828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.623 [2024-05-15 00:45:17.664851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.623 [2024-05-15 00:45:17.667587] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.623 [2024-05-15 00:45:17.667641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.623 [2024-05-15 00:45:17.667672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.624 [2024-05-15 00:45:17.670422] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.624 [2024-05-15 00:45:17.670483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.624 [2024-05-15 00:45:17.670512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.624 [2024-05-15 00:45:17.673332] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.624 [2024-05-15 00:45:17.673401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.624 [2024-05-15 00:45:17.673425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.624 [2024-05-15 00:45:17.676165] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.624 [2024-05-15 00:45:17.676219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.624 [2024-05-15 00:45:17.676245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.624 [2024-05-15 00:45:17.679019] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.624 [2024-05-15 00:45:17.679096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.624 [2024-05-15 00:45:17.679118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.624 [2024-05-15 00:45:17.681857] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.624 [2024-05-15 00:45:17.681933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.624 [2024-05-15 00:45:17.681961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.624 [2024-05-15 00:45:17.684715] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.624 [2024-05-15 00:45:17.684779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.624 [2024-05-15 00:45:17.684804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.624 [2024-05-15 00:45:17.687537] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.624 [2024-05-15 00:45:17.687607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.624 [2024-05-15 00:45:17.687631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.624 [2024-05-15 00:45:17.690340] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.624 [2024-05-15 00:45:17.690394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.624 [2024-05-15 00:45:17.690418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.624 [2024-05-15 00:45:17.693148] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.624 [2024-05-15 00:45:17.693203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.624 [2024-05-15 00:45:17.693228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.624 [2024-05-15 00:45:17.696059] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.624 [2024-05-15 00:45:17.696118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.624 [2024-05-15 00:45:17.696148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.624 [2024-05-15 00:45:17.698947] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.624 [2024-05-15 00:45:17.699004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.624 [2024-05-15 00:45:17.699035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.624 [2024-05-15 00:45:17.701760] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.624 [2024-05-15 00:45:17.701829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.624 [2024-05-15 00:45:17.701853] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.624 [2024-05-15 00:45:17.704614] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.624 [2024-05-15 00:45:17.704674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.624 [2024-05-15 00:45:17.704703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.624 [2024-05-15 00:45:17.707468] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.624 [2024-05-15 00:45:17.707524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.624 [2024-05-15 00:45:17.707548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.624 [2024-05-15 00:45:17.710284] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.624 [2024-05-15 00:45:17.710346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.624 [2024-05-15 00:45:17.710373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.624 [2024-05-15 00:45:17.713136] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.624 [2024-05-15 00:45:17.713193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.624 [2024-05-15 00:45:17.713226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.624 [2024-05-15 00:45:17.716011] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.624 [2024-05-15 00:45:17.716063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.624 [2024-05-15 00:45:17.716096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.624 [2024-05-15 00:45:17.718926] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.624 [2024-05-15 00:45:17.718984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.624 [2024-05-15 00:45:17.719009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.624 [2024-05-15 00:45:17.721694] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.624 [2024-05-15 00:45:17.721761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:51.624 [2024-05-15 00:45:17.721786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.624 [2024-05-15 00:45:17.724685] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.624 [2024-05-15 00:45:17.724737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.624 [2024-05-15 00:45:17.724760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.624 [2024-05-15 00:45:17.727570] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.624 [2024-05-15 00:45:17.727622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.624 [2024-05-15 00:45:17.727646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.624 [2024-05-15 00:45:17.730414] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.624 [2024-05-15 00:45:17.730475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.624 [2024-05-15 00:45:17.730497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.624 [2024-05-15 00:45:17.733341] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.624 [2024-05-15 00:45:17.733394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.624 [2024-05-15 00:45:17.733418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.624 [2024-05-15 00:45:17.736197] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.624 [2024-05-15 00:45:17.736260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.625 [2024-05-15 00:45:17.736289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.625 [2024-05-15 00:45:17.739101] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.625 [2024-05-15 00:45:17.739166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.625 [2024-05-15 00:45:17.739195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.625 [2024-05-15 00:45:17.741928] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.625 [2024-05-15 00:45:17.741996] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.625 [2024-05-15 00:45:17.742020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.625 [2024-05-15 00:45:17.744759] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.625 [2024-05-15 00:45:17.744812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.625 [2024-05-15 00:45:17.744845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.625 [2024-05-15 00:45:17.747588] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.625 [2024-05-15 00:45:17.747659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.625 [2024-05-15 00:45:17.747682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.625 [2024-05-15 00:45:17.750436] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.625 [2024-05-15 00:45:17.750486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.625 [2024-05-15 00:45:17.750511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.625 [2024-05-15 00:45:17.753235] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.625 [2024-05-15 00:45:17.753290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.625 [2024-05-15 00:45:17.753319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.625 [2024-05-15 00:45:17.756131] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.625 [2024-05-15 00:45:17.756189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.625 [2024-05-15 00:45:17.756215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.625 [2024-05-15 00:45:17.759040] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.625 [2024-05-15 00:45:17.759100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.625 [2024-05-15 00:45:17.759131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.625 [2024-05-15 00:45:17.761922] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:29:51.625 [2024-05-15 00:45:17.761977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.625 [2024-05-15 00:45:17.762002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.625 [2024-05-15 00:45:17.764774] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.625 [2024-05-15 00:45:17.764827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.625 [2024-05-15 00:45:17.764860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.625 [2024-05-15 00:45:17.767638] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.625 [2024-05-15 00:45:17.767710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.625 [2024-05-15 00:45:17.767732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.625 [2024-05-15 00:45:17.770457] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.625 [2024-05-15 00:45:17.770522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.625 [2024-05-15 00:45:17.770547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.625 [2024-05-15 00:45:17.773392] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.625 [2024-05-15 00:45:17.773467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.625 [2024-05-15 00:45:17.773501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.625 [2024-05-15 00:45:17.776277] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.625 [2024-05-15 00:45:17.776329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.625 [2024-05-15 00:45:17.776353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.625 [2024-05-15 00:45:17.779209] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.625 [2024-05-15 00:45:17.779263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.625 [2024-05-15 00:45:17.779289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.625 [2024-05-15 00:45:17.782125] 
tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.625 [2024-05-15 00:45:17.782227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.625 [2024-05-15 00:45:17.782252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.886 [2024-05-15 00:45:17.785887] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.886 [2024-05-15 00:45:17.785975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.886 [2024-05-15 00:45:17.786000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.886 [2024-05-15 00:45:17.788862] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.886 [2024-05-15 00:45:17.788927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.886 [2024-05-15 00:45:17.788951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.886 [2024-05-15 00:45:17.791612] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.886 [2024-05-15 00:45:17.791683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.886 [2024-05-15 00:45:17.791706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.886 [2024-05-15 00:45:17.794683] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.886 [2024-05-15 00:45:17.794780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.886 [2024-05-15 00:45:17.794811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.886 [2024-05-15 00:45:17.799103] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.886 [2024-05-15 00:45:17.799180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.886 [2024-05-15 00:45:17.799208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.886 [2024-05-15 00:45:17.804216] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.886 [2024-05-15 00:45:17.804335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.886 [2024-05-15 00:45:17.804366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.886 [2024-05-15 00:45:17.808836] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.886 [2024-05-15 00:45:17.808925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.886 [2024-05-15 00:45:17.808952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.886 [2024-05-15 00:45:17.814467] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.886 [2024-05-15 00:45:17.814631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.886 [2024-05-15 00:45:17.814657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.886 [2024-05-15 00:45:17.819419] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.886 [2024-05-15 00:45:17.819583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.886 [2024-05-15 00:45:17.819609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.886 [2024-05-15 00:45:17.824460] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.886 [2024-05-15 00:45:17.824529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.886 [2024-05-15 00:45:17.824560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.886 [2024-05-15 00:45:17.829437] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.886 [2024-05-15 00:45:17.829606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.886 [2024-05-15 00:45:17.829638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.886 [2024-05-15 00:45:17.834467] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.886 [2024-05-15 00:45:17.834558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.886 [2024-05-15 00:45:17.834583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.886 [2024-05-15 00:45:17.839523] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.886 [2024-05-15 00:45:17.839621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.886 [2024-05-15 00:45:17.839651] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.886 [2024-05-15 00:45:17.844604] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.886 [2024-05-15 00:45:17.844701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.886 [2024-05-15 00:45:17.844728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.886 [2024-05-15 00:45:17.849581] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.886 [2024-05-15 00:45:17.849664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.886 [2024-05-15 00:45:17.849693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.886 [2024-05-15 00:45:17.854585] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.886 [2024-05-15 00:45:17.854747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.886 [2024-05-15 00:45:17.854771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.886 [2024-05-15 00:45:17.859593] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.886 [2024-05-15 00:45:17.859695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.886 [2024-05-15 00:45:17.859720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.886 [2024-05-15 00:45:17.864630] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.886 [2024-05-15 00:45:17.864778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.886 [2024-05-15 00:45:17.864803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.886 [2024-05-15 00:45:17.869629] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.886 [2024-05-15 00:45:17.869733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.886 [2024-05-15 00:45:17.869758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.886 [2024-05-15 00:45:17.874656] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.886 [2024-05-15 00:45:17.874806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:51.886 [2024-05-15 00:45:17.874833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.886 [2024-05-15 00:45:17.879744] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.886 [2024-05-15 00:45:17.879894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.886 [2024-05-15 00:45:17.879917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.886 [2024-05-15 00:45:17.884695] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.886 [2024-05-15 00:45:17.884847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.886 [2024-05-15 00:45:17.884870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.887 [2024-05-15 00:45:17.889718] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.887 [2024-05-15 00:45:17.889802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.887 [2024-05-15 00:45:17.889826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.887 [2024-05-15 00:45:17.894728] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.887 [2024-05-15 00:45:17.894885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.887 [2024-05-15 00:45:17.894909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.887 [2024-05-15 00:45:17.899687] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.887 [2024-05-15 00:45:17.899837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.887 [2024-05-15 00:45:17.899860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.887 [2024-05-15 00:45:17.904754] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.887 [2024-05-15 00:45:17.904916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.887 [2024-05-15 00:45:17.904941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:51.887 [2024-05-15 00:45:17.909697] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.887 [2024-05-15 00:45:17.909855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.887 [2024-05-15 00:45:17.909879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:51.887 [2024-05-15 00:45:17.914786] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.887 [2024-05-15 00:45:17.914947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.887 [2024-05-15 00:45:17.914974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:51.887 [2024-05-15 00:45:17.919744] tcp.c:2058:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:29:51.887 [2024-05-15 00:45:17.919901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:51.887 [2024-05-15 00:45:17.919929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:51.887 00:29:51.887 Latency(us) 00:29:51.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:51.887 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:51.887 nvme0n1 : 2.00 7454.58 931.82 0.00 0.00 2141.88 1276.23 12555.32 00:29:51.887 =================================================================================================================== 00:29:51.887 Total : 7454.58 931.82 0.00 0.00 2141.88 1276.23 12555.32 00:29:51.887 0 00:29:51.887 00:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:51.887 00:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:51.887 00:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:51.887 00:45:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:51.887 | .driver_specific 00:29:51.887 | .nvme_error 00:29:51.887 | .status_code 00:29:51.887 | .command_transient_transport_error' 00:29:52.145 00:45:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 481 > 0 )) 00:29:52.145 00:45:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2185084 00:29:52.145 00:45:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 2185084 ']' 00:29:52.145 00:45:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 2185084 00:29:52.145 00:45:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:29:52.145 00:45:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:52.145 00:45:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2185084 00:29:52.145 00:45:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:29:52.145 00:45:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' 
reactor_1 = sudo ']' 00:29:52.145 00:45:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2185084' 00:29:52.145 killing process with pid 2185084 00:29:52.145 00:45:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 2185084 00:29:52.145 Received shutdown signal, test time was about 2.000000 seconds 00:29:52.145 00:29:52.145 Latency(us) 00:29:52.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:52.145 =================================================================================================================== 00:29:52.145 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:52.145 00:45:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 2185084 00:29:52.403 00:45:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2182223 00:29:52.403 00:45:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@947 -- # '[' -z 2182223 ']' 00:29:52.403 00:45:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@951 -- # kill -0 2182223 00:29:52.403 00:45:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # uname 00:29:52.403 00:45:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:29:52.403 00:45:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2182223 00:29:52.403 00:45:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:29:52.403 00:45:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:29:52.403 00:45:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2182223' 00:29:52.403 killing process with pid 2182223 00:29:52.403 00:45:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # kill 2182223 00:29:52.403 00:45:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # wait 2182223 00:29:52.403 [2024-05-15 00:45:18.528078] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:29:52.967 00:29:52.967 real 0m17.127s 00:29:52.967 user 0m32.548s 00:29:52.967 sys 0m3.600s 00:29:52.967 00:45:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:52.967 00:45:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:52.967 ************************************ 00:29:52.967 END TEST nvmf_digest_error 00:29:52.967 ************************************ 00:29:52.967 00:45:19 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:52.967 00:45:19 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:52.967 00:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:52.967 00:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:29:52.967 00:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:52.967 00:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:29:52.967 00:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:52.967 00:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:52.967 
rmmod nvme_tcp 00:29:52.967 rmmod nvme_fabrics 00:29:52.967 rmmod nvme_keyring 00:29:52.967 00:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:52.967 00:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:29:52.967 00:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:29:52.967 00:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2182223 ']' 00:29:52.967 00:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2182223 00:29:52.967 00:45:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@947 -- # '[' -z 2182223 ']' 00:29:52.967 00:45:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@951 -- # kill -0 2182223 00:29:52.967 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (2182223) - No such process 00:29:52.967 00:45:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@974 -- # echo 'Process with pid 2182223 is not found' 00:29:52.967 Process with pid 2182223 is not found 00:29:52.967 00:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:52.968 00:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:52.968 00:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:52.968 00:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:52.968 00:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:52.968 00:45:19 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.968 00:45:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:52.968 00:45:19 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:55.499 00:45:21 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:55.499 00:29:55.499 real 1m37.897s 00:29:55.499 user 2m16.401s 00:29:55.499 sys 0m15.311s 00:29:55.499 00:45:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:55.499 00:45:21 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:55.499 ************************************ 00:29:55.499 END TEST nvmf_digest 00:29:55.499 ************************************ 00:29:55.499 00:45:21 nvmf_tcp -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:29:55.499 00:45:21 nvmf_tcp -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:29:55.499 00:45:21 nvmf_tcp -- nvmf/nvmf.sh@120 -- # [[ phy-fallback == phy ]] 00:29:55.499 00:45:21 nvmf_tcp -- nvmf/nvmf.sh@125 -- # timing_exit host 00:29:55.499 00:45:21 nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:55.499 00:45:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:55.499 00:45:21 nvmf_tcp -- nvmf/nvmf.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:29:55.499 00:29:55.499 real 18m26.392s 00:29:55.499 user 38m8.280s 00:29:55.499 sys 5m3.106s 00:29:55.499 00:45:21 nvmf_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:29:55.499 00:45:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:55.499 ************************************ 00:29:55.499 END TEST nvmf_tcp 00:29:55.499 ************************************ 00:29:55.499 00:45:21 -- spdk/autotest.sh@284 -- # [[ 0 -eq 0 ]] 00:29:55.499 00:45:21 -- spdk/autotest.sh@285 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:55.499 00:45:21 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:29:55.499 
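Before the spdkcli run that starts below, a note on the pass/fail criterion of the nvmf_digest_error output above: host/digest.sh does not parse the per-I/O "Data digest error" messages, it asks the bperf application for the bdev's NVMe error counters and requires the COMMAND TRANSIENT TRANSPORT ERROR count to be non-zero. A condensed sketch of that query, using the same RPC socket, bdev name, and jq path that appear in this run:

  # Count of transient transport errors recorded for nvme0n1 during the 2 s randwrite job
  errs=$(/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errs > 0 )) && echo "digest corruption was surfaced as $errs transient transport errors"

In this run the counter came back as 481, which is why the (( 481 > 0 )) check at host/digest.sh@71 passed and the bperf process (pid 2185084) was then killed as part of normal teardown.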
00:45:21 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:29:55.499 00:45:21 -- common/autotest_common.sh@10 -- # set +x 00:29:55.499 ************************************ 00:29:55.499 START TEST spdkcli_nvmf_tcp 00:29:55.499 ************************************ 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:29:55.499 * Looking for test storage... 00:29:55.499 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/common.sh 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:55.499 00:45:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2186432 00:29:55.500 00:45:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2186432 00:29:55.500 00:45:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@828 -- # '[' -z 2186432 ']' 00:29:55.500 00:45:21 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:55.500 00:45:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local max_retries=100 00:29:55.500 00:45:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:55.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:55.500 00:45:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # xtrace_disable 00:29:55.500 00:45:21 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:55.500 00:45:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:55.500 [2024-05-15 00:45:21.431323] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:29:55.500 [2024-05-15 00:45:21.431439] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2186432 ] 00:29:55.500 EAL: No free 2048 kB hugepages reported on node 1 00:29:55.500 [2024-05-15 00:45:21.559673] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:55.758 [2024-05-15 00:45:21.662772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.758 [2024-05-15 00:45:21.662795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:56.016 00:45:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:29:56.016 00:45:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@861 -- # return 0 00:29:56.016 00:45:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:56.016 00:45:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:29:56.016 00:45:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:56.274 00:45:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:56.274 00:45:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:29:56.274 00:45:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:56.274 00:45:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:29:56.274 00:45:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:56.274 00:45:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:56.274 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:56.274 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:56.274 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:29:56.274 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:56.274 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:56.274 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:56.274 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:56.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:56.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:56.274 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:56.274 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:56.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:56.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:56.274 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:56.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:56.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:29:56.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:56.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:56.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:56.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:56.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:56.274 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:29:56.275 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:29:56.275 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:56.275 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:56.275 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:56.275 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:56.275 ' 00:29:58.804 [2024-05-15 00:45:24.544561] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:59.739 [2024-05-15 00:45:25.701886] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:29:59.739 [2024-05-15 00:45:25.702195] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:02.269 [2024-05-15 00:45:27.832648] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:03.643 [2024-05-15 00:45:29.662938] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:05.022 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:05.022 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:05.022 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:05.022 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:05.022 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:05.022 
Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:05.022 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:05.022 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:05.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:05.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:05.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:05.022 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:05.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:05.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:05.022 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:05.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:05.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:05.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:05.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:05.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:05.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:05.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:05.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:05.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:05.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:05.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:05.022 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:05.022 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:05.280 00:45:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:05.280 00:45:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:05.280 00:45:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:05.280 00:45:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter 
spdkcli_check_match 00:30:05.280 00:45:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:30:05.280 00:45:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:05.280 00:45:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:05.280 00:45:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:05.537 00:45:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:05.537 00:45:31 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:05.537 00:45:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:05.537 00:45:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:05.537 00:45:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:05.537 00:45:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:05.537 00:45:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@721 -- # xtrace_disable 00:30:05.537 00:45:31 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:05.537 00:45:31 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:05.537 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:05.537 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:05.537 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:05.537 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:05.537 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:05.537 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:05.537 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:05.537 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:05.537 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:05.537 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:05.537 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:05.537 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:05.537 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:05.537 ' 00:30:10.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:10.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:10.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:10.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:10.805 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:10.805 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:10.805 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:10.805 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:10.805 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:10.805 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:10.805 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:10.805 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:10.805 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:10.805 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:10.805 00:45:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:10.805 00:45:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:10.805 00:45:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:10.805 00:45:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2186432 00:30:10.805 00:45:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@947 -- # '[' -z 2186432 ']' 00:30:10.805 00:45:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # kill -0 2186432 00:30:10.805 00:45:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # uname 00:30:10.805 00:45:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:30:10.805 00:45:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2186432 00:30:10.805 00:45:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:30:10.805 00:45:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:30:10.805 00:45:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2186432' 00:30:10.805 killing process with pid 2186432 00:30:10.805 00:45:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # kill 2186432 00:30:10.806 [2024-05-15 00:45:36.648292] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:10.806 00:45:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # wait 2186432 00:30:11.065 00:45:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:11.065 00:45:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:11.065 00:45:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2186432 ']' 00:30:11.065 00:45:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2186432 00:30:11.065 00:45:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@947 -- # '[' -z 2186432 ']' 00:30:11.065 00:45:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@951 -- # kill -0 2186432 00:30:11.065 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (2186432) - No such process 00:30:11.065 00:45:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # echo 'Process with pid 2186432 is not found' 00:30:11.065 Process with pid 2186432 is not found 00:30:11.065 00:45:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:11.065 00:45:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:11.065 00:45:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:11.065 00:30:11.065 real 0m15.865s 00:30:11.065 user 0m32.026s 00:30:11.065 sys 0m0.790s 00:30:11.065 00:45:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:11.065 00:45:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:11.065 ************************************ 00:30:11.065 END TEST spdkcli_nvmf_tcp 00:30:11.065 ************************************ 00:30:11.065 00:45:37 -- spdk/autotest.sh@286 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:11.065 00:45:37 -- common/autotest_common.sh@1098 -- # '[' 3 -le 1 ']' 00:30:11.065 00:45:37 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:11.065 00:45:37 -- common/autotest_common.sh@10 -- # set +x 00:30:11.065 ************************************ 00:30:11.065 START TEST nvmf_identify_passthru 00:30:11.065 ************************************ 00:30:11.065 00:45:37 nvmf_identify_passthru -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:11.324 * Looking for test storage... 00:30:11.324 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:30:11.324 00:45:37 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:30:11.324 00:45:37 nvmf_identify_passthru -- 
scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:11.324 00:45:37 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:11.324 00:45:37 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:11.324 00:45:37 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.324 00:45:37 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.324 00:45:37 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.324 00:45:37 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:11.324 00:45:37 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:11.324 00:45:37 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:30:11.324 00:45:37 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:11.324 00:45:37 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:11.324 00:45:37 nvmf_identify_passthru -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:30:11.324 00:45:37 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.324 00:45:37 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.324 00:45:37 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.324 00:45:37 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:11.324 00:45:37 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.324 00:45:37 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.324 00:45:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:11.324 00:45:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:11.324 00:45:37 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:30:11.324 00:45:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:16.594 00:45:42 
nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:30:16.594 Found 0000:27:00.0 (0x8086 - 0x159b) 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:30:16.594 Found 0000:27:00.1 (0x8086 - 0x159b) 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:30:16.594 Found net devices under 0000:27:00.0: cvl_0_0 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:30:16.594 Found net devices under 0000:27:00.1: cvl_0_1 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
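The block above is nvmf/common.sh detecting usable NVMe-oF NICs: it builds the e810/x722/mlx device-ID lists, matches the two ice-bound ports at 0000:27:00.0 and 0000:27:00.1 (Intel device 0x159b), and resolves their kernel netdev names (cvl_0_0 and cvl_0_1) by globbing sysfs. Stripped of the xtrace prefixes, the name-resolution step amounts to roughly the following sketch (PCI addresses and names taken from this run; the real helper also checks link state and the transport type):

    for pci in 0000:27:00.0 0000:27:00.1; do
        # nvmf/common.sh finds the netdev the kernel created for each PCI function
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            echo "Found net devices under $pci: ${path##*/}"    # cvl_0_0, cvl_0_1 on this host
        done
    done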
00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:16.594 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:16.595 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:16.595 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:16.595 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:16.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:16.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.713 ms 00:30:16.595 00:30:16.595 --- 10.0.0.2 ping statistics --- 00:30:16.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.595 rtt min/avg/max/mdev = 0.713/0.713/0.713/0.000 ms 00:30:16.595 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:16.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:16.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:30:16.595 00:30:16.595 --- 10.0.0.1 ping statistics --- 00:30:16.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.595 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:30:16.595 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:16.595 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:30:16.595 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:16.595 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:16.595 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:16.595 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:16.595 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:16.595 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:16.595 00:45:42 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:16.854 00:45:42 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:16.855 00:45:42 nvmf_identify_passthru -- common/autotest_common.sh@721 -- # xtrace_disable 00:30:16.855 00:45:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:16.855 00:45:42 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:16.855 00:45:42 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # bdfs=() 00:30:16.855 00:45:42 nvmf_identify_passthru -- common/autotest_common.sh@1521 -- # local bdfs 00:30:16.855 00:45:42 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # bdfs=($(get_nvme_bdfs)) 00:30:16.855 00:45:42 nvmf_identify_passthru -- common/autotest_common.sh@1522 -- # get_nvme_bdfs 00:30:16.855 00:45:42 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=() 00:30:16.855 00:45:42 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # local bdfs 00:30:16.855 00:45:42 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:16.855 00:45:42 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:16.855 00:45:42 nvmf_identify_passthru -- common/autotest_common.sh@1511 -- # jq -r '.config[].params.traddr' 00:30:16.855 00:45:42 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # (( 2 == 0 )) 00:30:16.855 00:45:42 nvmf_identify_passthru -- common/autotest_common.sh@1516 -- # printf '%s\n' 0000:c9:00.0 0000:ca:00.0 00:30:16.855 00:45:42 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # echo 0000:c9:00.0 00:30:16.855 00:45:42 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:c9:00.0 00:30:16.855 00:45:42 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:c9:00.0 ']' 00:30:16.855 00:45:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:c9:00.0' -i 0 00:30:16.855 00:45:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:16.855 00:45:42 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:16.855 EAL: No free 2048 kB hugepages reported on node 1 00:30:22.131 
00:45:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ9413009R2P0BGN 00:30:22.131 00:45:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:c9:00.0' -i 0 00:30:22.131 00:45:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:22.131 00:45:48 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:22.131 EAL: No free 2048 kB hugepages reported on node 1 00:30:27.407 00:45:53 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:27.407 00:45:53 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:27.407 00:45:53 nvmf_identify_passthru -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:27.407 00:45:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:27.407 00:45:53 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:27.407 00:45:53 nvmf_identify_passthru -- common/autotest_common.sh@721 -- # xtrace_disable 00:30:27.407 00:45:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:27.407 00:45:53 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2195136 00:30:27.407 00:45:53 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:27.407 00:45:53 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2195136 00:30:27.407 00:45:53 nvmf_identify_passthru -- common/autotest_common.sh@828 -- # '[' -z 2195136 ']' 00:30:27.407 00:45:53 nvmf_identify_passthru -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:27.407 00:45:53 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local max_retries=100 00:30:27.407 00:45:53 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:27.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:27.407 00:45:53 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # xtrace_disable 00:30:27.407 00:45:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:27.407 00:45:53 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:27.407 [2024-05-15 00:45:53.445119] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:30:27.407 [2024-05-15 00:45:53.445227] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:27.407 EAL: No free 2048 kB hugepages reported on node 1 00:30:27.407 [2024-05-15 00:45:53.565348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:27.665 [2024-05-15 00:45:53.665878] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:27.665 [2024-05-15 00:45:53.665918] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:27.665 [2024-05-15 00:45:53.665927] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:27.665 [2024-05-15 00:45:53.665936] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:27.665 [2024-05-15 00:45:53.665944] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:27.665 [2024-05-15 00:45:53.666138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:27.665 [2024-05-15 00:45:53.666223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:27.665 [2024-05-15 00:45:53.666354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.665 [2024-05-15 00:45:53.666363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:28.300 00:45:54 nvmf_identify_passthru -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:30:28.300 00:45:54 nvmf_identify_passthru -- common/autotest_common.sh@861 -- # return 0 00:30:28.301 00:45:54 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:28.301 00:45:54 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.301 00:45:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:28.301 INFO: Log level set to 20 00:30:28.301 INFO: Requests: 00:30:28.301 { 00:30:28.301 "jsonrpc": "2.0", 00:30:28.301 "method": "nvmf_set_config", 00:30:28.301 "id": 1, 00:30:28.301 "params": { 00:30:28.301 "admin_cmd_passthru": { 00:30:28.301 "identify_ctrlr": true 00:30:28.301 } 00:30:28.301 } 00:30:28.301 } 00:30:28.301 00:30:28.301 INFO: response: 00:30:28.301 { 00:30:28.301 "jsonrpc": "2.0", 00:30:28.301 "id": 1, 00:30:28.301 "result": true 00:30:28.301 } 00:30:28.301 00:30:28.301 00:45:54 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.301 00:45:54 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:28.301 00:45:54 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.301 00:45:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:28.301 INFO: Setting log level to 20 00:30:28.301 INFO: Setting log level to 20 00:30:28.301 INFO: Log level set to 20 00:30:28.301 INFO: Log level set to 20 00:30:28.301 INFO: Requests: 00:30:28.301 { 00:30:28.301 "jsonrpc": "2.0", 00:30:28.301 "method": "framework_start_init", 00:30:28.301 "id": 1 00:30:28.301 } 00:30:28.301 00:30:28.301 INFO: Requests: 00:30:28.301 { 00:30:28.301 "jsonrpc": "2.0", 00:30:28.301 "method": "framework_start_init", 00:30:28.301 "id": 1 00:30:28.301 } 00:30:28.301 00:30:28.301 [2024-05-15 00:45:54.298585] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:28.301 INFO: response: 00:30:28.301 { 00:30:28.301 "jsonrpc": "2.0", 00:30:28.301 "id": 1, 00:30:28.301 "result": true 00:30:28.301 } 00:30:28.301 00:30:28.301 INFO: response: 00:30:28.301 { 00:30:28.301 "jsonrpc": "2.0", 00:30:28.301 "id": 1, 00:30:28.301 "result": true 00:30:28.301 } 00:30:28.301 00:30:28.301 00:45:54 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.301 00:45:54 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:28.301 00:45:54 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.301 00:45:54 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:30:28.301 INFO: Setting log level to 40 00:30:28.301 INFO: Setting log level to 40 00:30:28.301 INFO: Setting log level to 40 00:30:28.301 [2024-05-15 00:45:54.312970] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:28.301 00:45:54 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:28.301 00:45:54 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:28.301 00:45:54 nvmf_identify_passthru -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:28.301 00:45:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:28.301 00:45:54 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:c9:00.0 00:30:28.301 00:45:54 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:28.301 00:45:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:31.612 Nvme0n1 00:30:31.612 00:45:57 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.612 00:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:31.612 00:45:57 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.612 00:45:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:31.612 00:45:57 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.612 00:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:31.612 00:45:57 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.612 00:45:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:31.612 00:45:57 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.612 00:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:31.612 00:45:57 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.612 00:45:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:31.612 [2024-05-15 00:45:57.227233] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:31.612 [2024-05-15 00:45:57.227561] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:31.612 00:45:57 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.612 00:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:31.613 00:45:57 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.613 00:45:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:31.613 [ 00:30:31.613 { 00:30:31.613 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:31.613 "subtype": "Discovery", 00:30:31.613 "listen_addresses": [], 00:30:31.613 "allow_any_host": true, 00:30:31.613 "hosts": [] 00:30:31.613 }, 00:30:31.613 { 00:30:31.613 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:31.613 "subtype": "NVMe", 00:30:31.613 "listen_addresses": [ 00:30:31.613 { 00:30:31.613 "trtype": "TCP", 
00:30:31.613 "adrfam": "IPv4", 00:30:31.613 "traddr": "10.0.0.2", 00:30:31.613 "trsvcid": "4420" 00:30:31.613 } 00:30:31.613 ], 00:30:31.613 "allow_any_host": true, 00:30:31.613 "hosts": [], 00:30:31.613 "serial_number": "SPDK00000000000001", 00:30:31.613 "model_number": "SPDK bdev Controller", 00:30:31.613 "max_namespaces": 1, 00:30:31.613 "min_cntlid": 1, 00:30:31.613 "max_cntlid": 65519, 00:30:31.613 "namespaces": [ 00:30:31.613 { 00:30:31.613 "nsid": 1, 00:30:31.613 "bdev_name": "Nvme0n1", 00:30:31.613 "name": "Nvme0n1", 00:30:31.613 "nguid": "DEBC8AA4553342E7A474177CB23E9195", 00:30:31.613 "uuid": "debc8aa4-5533-42e7-a474-177cb23e9195" 00:30:31.613 } 00:30:31.613 ] 00:30:31.613 } 00:30:31.613 ] 00:30:31.613 00:45:57 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.613 00:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:31.613 00:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:31.613 00:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:31.613 EAL: No free 2048 kB hugepages reported on node 1 00:30:31.613 00:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ9413009R2P0BGN 00:30:31.613 00:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:31.613 00:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:31.613 00:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:31.613 EAL: No free 2048 kB hugepages reported on node 1 00:30:31.870 00:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:30:31.870 00:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ9413009R2P0BGN '!=' PHLJ9413009R2P0BGN ']' 00:30:31.870 00:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:30:31.870 00:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:31.870 00:45:57 nvmf_identify_passthru -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:31.870 00:45:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:31.870 00:45:57 nvmf_identify_passthru -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:31.870 00:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:31.870 00:45:57 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:31.870 00:45:57 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:31.870 00:45:57 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:30:31.870 00:45:57 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:31.870 00:45:57 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:30:31.870 00:45:57 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:31.870 00:45:57 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:31.870 rmmod nvme_tcp 00:30:31.870 rmmod nvme_fabrics 00:30:31.870 rmmod 
nvme_keyring 00:30:31.870 00:45:57 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:31.870 00:45:57 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:30:31.870 00:45:57 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:30:31.870 00:45:57 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2195136 ']' 00:30:31.870 00:45:57 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2195136 00:30:31.870 00:45:57 nvmf_identify_passthru -- common/autotest_common.sh@947 -- # '[' -z 2195136 ']' 00:30:31.870 00:45:57 nvmf_identify_passthru -- common/autotest_common.sh@951 -- # kill -0 2195136 00:30:31.870 00:45:57 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # uname 00:30:31.870 00:45:57 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:30:31.870 00:45:57 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2195136 00:30:31.870 00:45:58 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:30:31.870 00:45:58 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:30:31.870 00:45:58 nvmf_identify_passthru -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2195136' 00:30:31.870 killing process with pid 2195136 00:30:31.870 00:45:58 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # kill 2195136 00:30:31.870 [2024-05-15 00:45:58.020694] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:30:31.870 00:45:58 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # wait 2195136 00:30:35.160 00:46:00 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:35.160 00:46:00 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:35.160 00:46:00 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:35.160 00:46:00 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:35.160 00:46:00 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:35.160 00:46:00 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.160 00:46:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:35.160 00:46:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.062 00:46:02 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:37.062 00:30:37.062 real 0m25.605s 00:30:37.062 user 0m37.146s 00:30:37.062 sys 0m5.200s 00:30:37.062 00:46:02 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:37.062 00:46:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:37.062 ************************************ 00:30:37.062 END TEST nvmf_identify_passthru 00:30:37.062 ************************************ 00:30:37.062 00:46:02 -- spdk/autotest.sh@288 -- # run_test nvmf_dif /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:37.062 00:46:02 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:30:37.062 00:46:02 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:37.062 00:46:02 -- common/autotest_common.sh@10 -- # set +x 00:30:37.062 ************************************ 00:30:37.062 START TEST nvmf_dif 00:30:37.062 
************************************ 00:30:37.062 00:46:02 nvmf_dif -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:37.062 * Looking for test storage... 00:30:37.062 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:30:37.062 00:46:02 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:30:37.062 00:46:02 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:37.062 00:46:02 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:37.062 00:46:02 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:37.062 00:46:02 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.062 00:46:02 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.062 00:46:02 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.062 00:46:02 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:30:37.062 00:46:02 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:37.062 00:46:02 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:30:37.062 00:46:02 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:37.062 00:46:02 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:37.062 00:46:02 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:30:37.062 00:46:02 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.062 00:46:02 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:37.062 00:46:02 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:37.062 00:46:02 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:30:37.062 00:46:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 
00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:30:42.328 Found 0000:27:00.0 (0x8086 - 0x159b) 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:30:42.328 Found 0000:27:00.1 (0x8086 - 0x159b) 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@372 -- # 
[[ '' == e810 ]] 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:30:42.328 Found net devices under 0000:27:00.0: cvl_0_0 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:30:42.328 Found net devices under 0000:27:00.1: cvl_0_1 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:42.328 00:46:07 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:42.329 00:46:07 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:42.329 00:46:07 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:42.329 00:46:07 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:42.329 00:46:07 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:42.329 00:46:07 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:42.329 00:46:07 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:42.329 00:46:07 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:42.329 00:46:08 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:42.329 00:46:08 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:42.329 00:46:08 nvmf_dif -- nvmf/common.sh@258 -- # ip 
link set cvl_0_1 up 00:30:42.329 00:46:08 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:42.329 00:46:08 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:42.329 00:46:08 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:42.329 00:46:08 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:42.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:42.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:30:42.329 00:30:42.329 --- 10.0.0.2 ping statistics --- 00:30:42.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.329 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:30:42.329 00:46:08 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:42.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:42.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:30:42.329 00:30:42.329 --- 10.0.0.1 ping statistics --- 00:30:42.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:42.329 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:30:42.329 00:46:08 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:42.329 00:46:08 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:30:42.329 00:46:08 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:30:42.329 00:46:08 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:30:44.860 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:30:44.860 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:44.860 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:30:44.860 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:30:44.860 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:30:44.860 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:30:44.860 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:30:44.861 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:30:44.861 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:30:44.861 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:30:44.861 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:30:44.861 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:30:44.861 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:30:44.861 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:30:44.861 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:44.861 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:30:44.861 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:30:44.861 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:30:44.861 00:46:10 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:44.861 00:46:10 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:44.861 00:46:10 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:44.861 00:46:10 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:44.861 00:46:10 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:44.861 00:46:10 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:44.861 00:46:11 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:44.861 00:46:11 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:30:44.861 
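At this point the dif test has rebuilt the same two-namespace TCP topology that identify_passthru used earlier: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed 10.0.0.2/24 (the target side), cvl_0_1 stays in the default namespace as 10.0.0.1/24 (the initiator side), TCP port 4420 is opened in iptables, and reachability is verified with one ping in each direction. The test also appends --dif-insert-or-strip to NVMF_TRANSPORT_OPTS so the TCP transport created later will insert and strip protection information. Consolidated from the trace (same commands, minus the xtrace prefixes and the address flushes):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target interface lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator interface, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # default namespace -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespace -> default namespace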
00:46:11 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:44.861 00:46:11 nvmf_dif -- common/autotest_common.sh@721 -- # xtrace_disable 00:30:44.861 00:46:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:44.861 00:46:11 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2201715 00:30:44.861 00:46:11 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2201715 00:30:44.861 00:46:11 nvmf_dif -- common/autotest_common.sh@828 -- # '[' -z 2201715 ']' 00:30:44.861 00:46:11 nvmf_dif -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:44.861 00:46:11 nvmf_dif -- common/autotest_common.sh@833 -- # local max_retries=100 00:30:44.861 00:46:11 nvmf_dif -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:44.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:44.861 00:46:11 nvmf_dif -- common/autotest_common.sh@837 -- # xtrace_disable 00:30:44.861 00:46:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:44.861 00:46:11 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:45.121 [2024-05-15 00:46:11.085220] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:30:45.121 [2024-05-15 00:46:11.085316] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:45.121 EAL: No free 2048 kB hugepages reported on node 1 00:30:45.121 [2024-05-15 00:46:11.202971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:45.382 [2024-05-15 00:46:11.299874] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:45.382 [2024-05-15 00:46:11.299909] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:45.382 [2024-05-15 00:46:11.299918] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:45.382 [2024-05-15 00:46:11.299928] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:45.382 [2024-05-15 00:46:11.299935] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
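nvmfappstart launches the target application inside the namespace and then blocks until its JSON-RPC socket answers; every later rpc_cmd in this test talks to that instance, while the fio initiator stays in the default namespace and reaches it over 10.0.0.2:4420. Reduced to its essentials, the start-up sequence seen in the trace is roughly the following (paths and flags as logged; waitforlisten is the autotest helper that polls /var/tmp/spdk.sock, and the backgrounding with & / $! is assumed from the pid handling visible above):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    # do not issue RPCs until the target is listening on /var/tmp/spdk.sock
    waitforlisten "$nvmfpid"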
00:30:45.382 [2024-05-15 00:46:11.299966] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.642 00:46:11 nvmf_dif -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:30:45.642 00:46:11 nvmf_dif -- common/autotest_common.sh@861 -- # return 0 00:30:45.642 00:46:11 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:45.642 00:46:11 nvmf_dif -- common/autotest_common.sh@727 -- # xtrace_disable 00:30:45.642 00:46:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:45.642 00:46:11 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:45.642 00:46:11 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:30:45.642 00:46:11 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:45.642 00:46:11 nvmf_dif -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:45.642 00:46:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:45.642 [2024-05-15 00:46:11.801819] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:45.902 00:46:11 nvmf_dif -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:45.902 00:46:11 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:45.902 00:46:11 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:30:45.902 00:46:11 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:45.902 00:46:11 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:45.902 ************************************ 00:30:45.902 START TEST fio_dif_1_default 00:30:45.902 ************************************ 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1122 -- # fio_dif_1 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:45.902 bdev_null0 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:45.902 [2024-05-15 00:46:11.869782] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:30:45.902 [2024-05-15 00:46:11.870045] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1336 -- # local sanitizers 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1338 -- # shift 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local asan_lib= 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:45.902 { 00:30:45.902 "params": { 00:30:45.902 "name": "Nvme$subsystem", 00:30:45.902 "trtype": "$TEST_TRANSPORT", 00:30:45.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:45.902 "adrfam": "ipv4", 00:30:45.902 "trsvcid": "$NVMF_PORT", 00:30:45.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:45.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:45.902 "hdgst": ${hdgst:-false}, 00:30:45.902 "ddgst": ${ddgst:-false} 00:30:45.902 }, 00:30:45.902 "method": "bdev_nvme_attach_controller" 00:30:45.902 } 00:30:45.902 EOF 00:30:45.902 )") 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 
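The fio_dif_1_default job being assembled here drives fio through SPDK's bdev fio plugin rather than the kernel initiator: gen_nvmf_target_json emits a bdev_nvme_attach_controller config pointing at nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420 (printed just below), and fio_bdev preloads the plugin together with libasan, since this run has ASAN enabled. The resulting invocation, as it appears further down in the trace:

    LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
    # fd 62 carries the generated bdev_nvme JSON config, fd 61 the fio job file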
00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # grep libasan 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:30:45.902 00:46:11 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:45.902 "params": { 00:30:45.902 "name": "Nvme0", 00:30:45.902 "trtype": "tcp", 00:30:45.902 "traddr": "10.0.0.2", 00:30:45.902 "adrfam": "ipv4", 00:30:45.902 "trsvcid": "4420", 00:30:45.903 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:45.903 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:45.903 "hdgst": false, 00:30:45.903 "ddgst": false 00:30:45.903 }, 00:30:45.903 "method": "bdev_nvme_attach_controller" 00:30:45.903 }' 00:30:45.903 00:46:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:45.903 00:46:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:45.903 00:46:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # break 00:30:45.903 00:46:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:45.903 00:46:11 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:46.470 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:46.470 fio-3.35 00:30:46.470 Starting 1 thread 00:30:46.470 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.667 00:30:58.667 filename0: (groupid=0, jobs=1): err= 0: pid=2202193: Wed May 15 00:46:23 2024 00:30:58.667 read: IOPS=189, BW=758KiB/s (776kB/s)(7600KiB/10031msec) 00:30:58.667 slat (nsec): min=6043, max=48873, avg=7420.66, stdev=2832.27 00:30:58.667 clat (usec): min=394, max=42535, avg=21097.17, stdev=20515.47 00:30:58.667 lat (usec): min=400, max=42541, avg=21104.59, stdev=20514.99 00:30:58.667 clat percentiles (usec): 00:30:58.667 | 1.00th=[ 420], 5.00th=[ 433], 10.00th=[ 437], 20.00th=[ 445], 00:30:58.667 | 30.00th=[ 453], 40.00th=[ 486], 50.00th=[40633], 60.00th=[41157], 00:30:58.667 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:30:58.667 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:30:58.667 | 99.99th=[42730] 00:30:58.667 bw ( KiB/s): min= 702, max= 768, per=100.00%, avg=758.30, stdev=23.69, samples=20 00:30:58.667 iops : min= 175, max= 192, avg=189.55, stdev= 5.99, samples=20 00:30:58.667 lat (usec) : 500=44.26%, 750=5.42% 00:30:58.667 lat (msec) : 50=50.32% 00:30:58.667 cpu : usr=96.07%, sys=3.60%, ctx=23, majf=0, minf=1634 00:30:58.667 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:58.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.667 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:58.667 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:58.667 latency : target=0, window=0, percentile=100.00%, depth=4 00:30:58.667 00:30:58.667 Run status group 0 (all jobs): 00:30:58.667 READ: bw=758KiB/s (776kB/s), 758KiB/s-758KiB/s (776kB/s-776kB/s), io=7600KiB (7782kB), run=10031-10031msec 00:30:58.667 ----------------------------------------------------- 00:30:58.667 Suppressions used: 00:30:58.667 count bytes template 00:30:58.667 1 8 /usr/src/fio/parse.c 00:30:58.667 1 8 libtcmalloc_minimal.so 00:30:58.667 1 904 libcrypto.so 00:30:58.667 ----------------------------------------------------- 00:30:58.667 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.667 00:30:58.667 real 0m11.703s 00:30:58.667 user 0m25.402s 00:30:58.667 sys 0m0.797s 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # xtrace_disable 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:30:58.667 ************************************ 00:30:58.667 END TEST fio_dif_1_default 00:30:58.667 ************************************ 00:30:58.667 00:46:23 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:30:58.667 00:46:23 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:30:58.667 00:46:23 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:30:58.667 00:46:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:58.667 ************************************ 00:30:58.667 START TEST fio_dif_1_multi_subsystems 00:30:58.667 ************************************ 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1122 -- # fio_dif_1_multi_subsystems 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # 
create_subsystem 0 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:58.667 bdev_null0 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:58.667 [2024-05-15 00:46:23.646418] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:58.667 bdev_null1 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:58.667 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.668 00:46:23 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:58.668 { 00:30:58.668 "params": { 00:30:58.668 "name": "Nvme$subsystem", 00:30:58.668 "trtype": "$TEST_TRANSPORT", 00:30:58.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:58.668 "adrfam": "ipv4", 00:30:58.668 "trsvcid": "$NVMF_PORT", 00:30:58.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:58.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:58.668 "hdgst": ${hdgst:-false}, 00:30:58.668 "ddgst": ${ddgst:-false} 00:30:58.668 }, 00:30:58.668 "method": "bdev_nvme_attach_controller" 00:30:58.668 } 00:30:58.668 EOF 00:30:58.668 )") 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1336 -- # local sanitizers 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1338 -- # shift 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local asan_lib= 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:30:58.668 00:46:23 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # grep libasan 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:58.668 { 00:30:58.668 "params": { 00:30:58.668 "name": "Nvme$subsystem", 00:30:58.668 "trtype": "$TEST_TRANSPORT", 00:30:58.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:58.668 "adrfam": "ipv4", 00:30:58.668 "trsvcid": "$NVMF_PORT", 00:30:58.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:58.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:58.668 "hdgst": ${hdgst:-false}, 00:30:58.668 "ddgst": ${ddgst:-false} 00:30:58.668 }, 00:30:58.668 "method": "bdev_nvme_attach_controller" 00:30:58.668 } 00:30:58.668 EOF 00:30:58.668 )") 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:58.668 "params": { 00:30:58.668 "name": "Nvme0", 00:30:58.668 "trtype": "tcp", 00:30:58.668 "traddr": "10.0.0.2", 00:30:58.668 "adrfam": "ipv4", 00:30:58.668 "trsvcid": "4420", 00:30:58.668 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:58.668 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:58.668 "hdgst": false, 00:30:58.668 "ddgst": false 00:30:58.668 }, 00:30:58.668 "method": "bdev_nvme_attach_controller" 00:30:58.668 },{ 00:30:58.668 "params": { 00:30:58.668 "name": "Nvme1", 00:30:58.668 "trtype": "tcp", 00:30:58.668 "traddr": "10.0.0.2", 00:30:58.668 "adrfam": "ipv4", 00:30:58.668 "trsvcid": "4420", 00:30:58.668 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:58.668 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:58.668 "hdgst": false, 00:30:58.668 "ddgst": false 00:30:58.668 }, 00:30:58.668 "method": "bdev_nvme_attach_controller" 00:30:58.668 }' 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # asan_lib=/usr/lib64/libasan.so.8 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # break 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:58.668 00:46:23 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:58.668 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:58.668 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:58.668 fio-3.35 00:30:58.668 Starting 2 threads 00:30:58.668 EAL: No free 2048 kB hugepages reported on node 1 00:31:10.852 00:31:10.852 filename0: (groupid=0, jobs=1): err= 0: pid=2204719: Wed May 15 00:46:34 2024 00:31:10.852 read: IOPS=97, BW=390KiB/s (400kB/s)(3904KiB/10005msec) 00:31:10.852 slat (nsec): min=6049, max=34156, avg=7699.27, stdev=2305.80 00:31:10.852 clat (usec): min=40767, max=41346, avg=40979.70, stdev=70.33 00:31:10.852 lat (usec): min=40773, max=41380, avg=40987.40, stdev=70.39 00:31:10.852 clat percentiles (usec): 00:31:10.852 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:10.852 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:10.852 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:10.852 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:10.852 | 99.99th=[41157] 00:31:10.852 bw ( KiB/s): min= 384, max= 416, per=49.62%, avg=388.80, stdev=11.72, samples=20 00:31:10.852 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:31:10.852 lat (msec) : 50=100.00% 00:31:10.852 cpu : usr=98.23%, sys=1.47%, ctx=14, majf=0, minf=1634 00:31:10.852 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:10.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.852 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.852 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.852 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:10.853 filename1: (groupid=0, jobs=1): err= 0: pid=2204720: Wed May 15 00:46:34 2024 00:31:10.853 read: IOPS=97, BW=392KiB/s (401kB/s)(3920KiB/10005msec) 00:31:10.853 slat (nsec): min=5985, max=29695, avg=7748.21, stdev=2586.89 00:31:10.853 clat (usec): min=418, max=41750, avg=40813.85, stdev=2587.67 00:31:10.853 lat (usec): min=425, max=41780, avg=40821.59, stdev=2587.46 00:31:10.853 clat percentiles (usec): 00:31:10.853 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:10.853 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:10.853 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:10.853 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:31:10.853 | 99.99th=[41681] 00:31:10.853 bw ( KiB/s): min= 384, max= 448, per=49.87%, avg=390.40, stdev=16.74, samples=20 00:31:10.853 iops : min= 96, max= 112, avg=97.60, stdev= 4.19, samples=20 00:31:10.853 lat (usec) : 500=0.41% 00:31:10.853 lat (msec) : 50=99.59% 00:31:10.853 cpu : usr=98.31%, sys=1.39%, ctx=14, majf=0, minf=1632 00:31:10.853 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:10.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:10.853 issued rwts: total=980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:10.853 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:10.853 00:31:10.853 Run status group 0 (all jobs): 00:31:10.853 READ: bw=782KiB/s (801kB/s), 390KiB/s-392KiB/s (400kB/s-401kB/s), io=7824KiB (8012kB), run=10005-10005msec 00:31:10.853 ----------------------------------------------------- 00:31:10.853 Suppressions used: 00:31:10.853 count bytes template 00:31:10.853 2 16 /usr/src/fio/parse.c 00:31:10.853 1 8 libtcmalloc_minimal.so 00:31:10.853 1 904 libcrypto.so 00:31:10.853 ----------------------------------------------------- 00:31:10.853 00:31:10.853 00:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:10.853 00:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:10.853 00:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:10.853 00:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:10.853 00:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:10.853 00:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:10.853 00:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.853 00:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:10.853 00:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.853 00:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:10.853 00:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.853 00:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:10.853 00:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.853 00:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:10.853 00:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:10.853 00:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:10.853 00:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:10.853 00:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.853 00:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:10.853 00:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.853 00:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:10.853 00:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.853 00:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:10.853 00:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.853 00:31:10.853 real 0m11.896s 00:31:10.853 user 0m35.385s 00:31:10.853 sys 0m0.698s 00:31:10.853 00:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # xtrace_disable 00:31:10.853 00:46:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:10.853 ************************************ 00:31:10.853 END TEST fio_dif_1_multi_subsystems 00:31:10.853 ************************************ 00:31:10.853 00:46:35 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:10.853 00:46:35 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:31:10.853 00:46:35 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:31:10.853 00:46:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:10.853 ************************************ 00:31:10.853 START TEST fio_dif_rand_params 00:31:10.853 ************************************ 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1122 -- # fio_dif_rand_params 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 
--dif-type 3 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.853 bdev_null0 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:10.853 [2024-05-15 00:46:35.603236] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:10.853 { 00:31:10.853 "params": { 00:31:10.853 "name": "Nvme$subsystem", 00:31:10.853 "trtype": "$TEST_TRANSPORT", 00:31:10.853 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:10.853 "adrfam": "ipv4", 00:31:10.853 "trsvcid": "$NVMF_PORT", 00:31:10.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:10.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:10.853 "hdgst": ${hdgst:-false}, 00:31:10.853 "ddgst": ${ddgst:-false} 00:31:10.853 }, 00:31:10.853 "method": "bdev_nvme_attach_controller" 00:31:10.853 } 00:31:10.853 EOF 00:31:10.853 )") 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:31:10.853 00:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:31:10.854 00:46:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:10.854 00:46:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:10.854 00:46:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:10.854 00:46:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:10.854 00:46:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:10.854 "params": { 00:31:10.854 "name": "Nvme0", 00:31:10.854 "trtype": "tcp", 00:31:10.854 "traddr": "10.0.0.2", 00:31:10.854 "adrfam": "ipv4", 00:31:10.854 "trsvcid": "4420", 00:31:10.854 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:10.854 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:10.854 "hdgst": false, 00:31:10.854 "ddgst": false 00:31:10.854 }, 00:31:10.854 "method": "bdev_nvme_attach_controller" 00:31:10.854 }' 00:31:10.854 00:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:10.854 00:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:10.854 00:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # break 00:31:10.854 00:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:10.854 00:46:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:10.854 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:10.854 ... 
00:31:10.854 fio-3.35 00:31:10.854 Starting 3 threads 00:31:10.854 EAL: No free 2048 kB hugepages reported on node 1 00:31:16.114 00:31:16.114 filename0: (groupid=0, jobs=1): err= 0: pid=2207249: Wed May 15 00:46:41 2024 00:31:16.114 read: IOPS=289, BW=36.2MiB/s (38.0MB/s)(181MiB/5004msec) 00:31:16.114 slat (nsec): min=6117, max=32029, avg=8687.69, stdev=2207.54 00:31:16.114 clat (usec): min=3773, max=52336, avg=10346.93, stdev=9825.14 00:31:16.114 lat (usec): min=3780, max=52344, avg=10355.62, stdev=9825.14 00:31:16.114 clat percentiles (usec): 00:31:16.114 | 1.00th=[ 4621], 5.00th=[ 5735], 10.00th=[ 6128], 20.00th=[ 6980], 00:31:16.114 | 30.00th=[ 7439], 40.00th=[ 7767], 50.00th=[ 8094], 60.00th=[ 8356], 00:31:16.114 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9765], 95.00th=[46924], 00:31:16.114 | 99.00th=[49546], 99.50th=[50070], 99.90th=[51643], 99.95th=[52167], 00:31:16.114 | 99.99th=[52167] 00:31:16.114 bw ( KiB/s): min=24832, max=46080, per=31.97%, avg=37043.20, stdev=8000.47, samples=10 00:31:16.114 iops : min= 194, max= 360, avg=289.40, stdev=62.50, samples=10 00:31:16.114 lat (msec) : 4=0.28%, 10=91.30%, 20=2.21%, 50=5.87%, 100=0.35% 00:31:16.114 cpu : usr=97.22%, sys=2.48%, ctx=10, majf=0, minf=1634 00:31:16.114 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:16.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.114 issued rwts: total=1449,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.114 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:16.114 filename0: (groupid=0, jobs=1): err= 0: pid=2207250: Wed May 15 00:46:41 2024 00:31:16.114 read: IOPS=292, BW=36.6MiB/s (38.4MB/s)(185MiB/5043msec) 00:31:16.114 slat (nsec): min=6155, max=25656, avg=8807.14, stdev=2330.99 00:31:16.114 clat (usec): min=2946, max=52586, avg=10210.47, stdev=7357.47 00:31:16.114 lat (usec): min=2953, max=52593, avg=10219.28, stdev=7357.68 00:31:16.114 clat percentiles (usec): 00:31:16.114 | 1.00th=[ 3458], 5.00th=[ 5080], 10.00th=[ 5932], 20.00th=[ 6390], 00:31:16.114 | 30.00th=[ 7111], 40.00th=[ 8586], 50.00th=[ 9503], 60.00th=[10159], 00:31:16.114 | 70.00th=[10945], 80.00th=[11469], 90.00th=[12256], 95.00th=[13042], 00:31:16.114 | 99.00th=[49546], 99.50th=[50594], 99.90th=[52167], 99.95th=[52691], 00:31:16.114 | 99.99th=[52691] 00:31:16.115 bw ( KiB/s): min=27904, max=45568, per=32.57%, avg=37734.40, stdev=5107.47, samples=10 00:31:16.115 iops : min= 218, max= 356, avg=294.80, stdev=39.90, samples=10 00:31:16.115 lat (msec) : 4=2.78%, 10=55.28%, 20=38.75%, 50=2.51%, 100=0.68% 00:31:16.115 cpu : usr=97.01%, sys=2.72%, ctx=7, majf=0, minf=1637 00:31:16.115 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:16.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.115 issued rwts: total=1476,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.115 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:16.115 filename0: (groupid=0, jobs=1): err= 0: pid=2207251: Wed May 15 00:46:41 2024 00:31:16.115 read: IOPS=325, BW=40.7MiB/s (42.6MB/s)(205MiB/5044msec) 00:31:16.115 slat (nsec): min=5730, max=29058, avg=8940.39, stdev=2277.97 00:31:16.115 clat (usec): min=2966, max=87091, avg=9185.22, stdev=6877.94 00:31:16.115 lat (usec): min=2972, max=87098, avg=9194.16, stdev=6878.01 00:31:16.115 clat percentiles 
(usec): 00:31:16.115 | 1.00th=[ 3326], 5.00th=[ 3589], 10.00th=[ 5014], 20.00th=[ 5932], 00:31:16.115 | 30.00th=[ 6325], 40.00th=[ 7439], 50.00th=[ 8979], 60.00th=[ 9503], 00:31:16.115 | 70.00th=[10159], 80.00th=[10945], 90.00th=[11600], 95.00th=[12256], 00:31:16.115 | 99.00th=[47973], 99.50th=[50070], 99.90th=[85459], 99.95th=[87557], 00:31:16.115 | 99.99th=[87557] 00:31:16.115 bw ( KiB/s): min=29440, max=54637, per=36.20%, avg=41943.70, stdev=8913.83, samples=10 00:31:16.115 iops : min= 230, max= 426, avg=327.60, stdev=69.50, samples=10 00:31:16.115 lat (msec) : 4=8.78%, 10=59.11%, 20=29.98%, 50=1.65%, 100=0.49% 00:31:16.115 cpu : usr=97.03%, sys=2.70%, ctx=7, majf=0, minf=1632 00:31:16.115 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:16.115 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.115 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:16.115 issued rwts: total=1641,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:16.115 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:16.115 00:31:16.115 Run status group 0 (all jobs): 00:31:16.115 READ: bw=113MiB/s (119MB/s), 36.2MiB/s-40.7MiB/s (38.0MB/s-42.6MB/s), io=571MiB (598MB), run=5004-5044msec 00:31:16.373 ----------------------------------------------------- 00:31:16.373 Suppressions used: 00:31:16.373 count bytes template 00:31:16.373 5 44 /usr/src/fio/parse.c 00:31:16.373 1 8 libtcmalloc_minimal.so 00:31:16.373 1 904 libcrypto.so 00:31:16.373 ----------------------------------------------------- 00:31:16.373 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:16.373 00:46:42 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.373 bdev_null0 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:16.373 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:16.374 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.374 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:16.374 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:16.374 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:16.374 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.374 [2024-05-15 00:46:42.512269] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:16.374 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:16.374 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:16.374 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:16.374 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:16.374 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:16.374 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:16.374 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.374 bdev_null1 00:31:16.374 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:16.374 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:16.374 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:16.374 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.374 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 
0 == 0 ]] 00:31:16.374 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:16.374 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:16.374 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.632 bdev_null2 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 
00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:16.632 00:46:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:16.633 { 00:31:16.633 "params": { 00:31:16.633 "name": "Nvme$subsystem", 00:31:16.633 "trtype": "$TEST_TRANSPORT", 00:31:16.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:16.633 "adrfam": "ipv4", 00:31:16.633 "trsvcid": "$NVMF_PORT", 00:31:16.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:16.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:16.633 "hdgst": ${hdgst:-false}, 00:31:16.633 "ddgst": ${ddgst:-false} 00:31:16.633 }, 00:31:16.633 "method": "bdev_nvme_attach_controller" 00:31:16.633 } 00:31:16.633 EOF 00:31:16.633 )") 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:16.633 { 00:31:16.633 "params": { 00:31:16.633 "name": "Nvme$subsystem", 00:31:16.633 "trtype": "$TEST_TRANSPORT", 
00:31:16.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:16.633 "adrfam": "ipv4", 00:31:16.633 "trsvcid": "$NVMF_PORT", 00:31:16.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:16.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:16.633 "hdgst": ${hdgst:-false}, 00:31:16.633 "ddgst": ${ddgst:-false} 00:31:16.633 }, 00:31:16.633 "method": "bdev_nvme_attach_controller" 00:31:16.633 } 00:31:16.633 EOF 00:31:16.633 )") 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:16.633 { 00:31:16.633 "params": { 00:31:16.633 "name": "Nvme$subsystem", 00:31:16.633 "trtype": "$TEST_TRANSPORT", 00:31:16.633 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:16.633 "adrfam": "ipv4", 00:31:16.633 "trsvcid": "$NVMF_PORT", 00:31:16.633 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:16.633 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:16.633 "hdgst": ${hdgst:-false}, 00:31:16.633 "ddgst": ${ddgst:-false} 00:31:16.633 }, 00:31:16.633 "method": "bdev_nvme_attach_controller" 00:31:16.633 } 00:31:16.633 EOF 00:31:16.633 )") 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:16.633 "params": { 00:31:16.633 "name": "Nvme0", 00:31:16.633 "trtype": "tcp", 00:31:16.633 "traddr": "10.0.0.2", 00:31:16.633 "adrfam": "ipv4", 00:31:16.633 "trsvcid": "4420", 00:31:16.633 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:16.633 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:16.633 "hdgst": false, 00:31:16.633 "ddgst": false 00:31:16.633 }, 00:31:16.633 "method": "bdev_nvme_attach_controller" 00:31:16.633 },{ 00:31:16.633 "params": { 00:31:16.633 "name": "Nvme1", 00:31:16.633 "trtype": "tcp", 00:31:16.633 "traddr": "10.0.0.2", 00:31:16.633 "adrfam": "ipv4", 00:31:16.633 "trsvcid": "4420", 00:31:16.633 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:16.633 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:16.633 "hdgst": false, 00:31:16.633 "ddgst": false 00:31:16.633 }, 00:31:16.633 "method": "bdev_nvme_attach_controller" 00:31:16.633 },{ 00:31:16.633 "params": { 00:31:16.633 "name": "Nvme2", 00:31:16.633 "trtype": "tcp", 00:31:16.633 "traddr": "10.0.0.2", 00:31:16.633 "adrfam": "ipv4", 00:31:16.633 "trsvcid": "4420", 00:31:16.633 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:16.633 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:16.633 "hdgst": false, 00:31:16.633 "ddgst": false 00:31:16.633 }, 00:31:16.633 "method": "bdev_nvme_attach_controller" 00:31:16.633 }' 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # break 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:16.633 00:46:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:16.927 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:16.927 ... 00:31:16.927 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:16.927 ... 00:31:16.927 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:16.927 ... 00:31:16.927 fio-3.35 00:31:16.927 Starting 24 threads 00:31:16.927 EAL: No free 2048 kB hugepages reported on node 1 00:31:29.172 00:31:29.172 filename0: (groupid=0, jobs=1): err= 0: pid=2208881: Wed May 15 00:46:54 2024 00:31:29.172 read: IOPS=504, BW=2017KiB/s (2065kB/s)(19.8MiB/10028msec) 00:31:29.172 slat (usec): min=4, max=144, avg=19.94, stdev=17.56 00:31:29.172 clat (usec): min=6267, max=39467, avg=31587.81, stdev=2214.43 00:31:29.172 lat (usec): min=6274, max=39478, avg=31607.76, stdev=2214.41 00:31:29.172 clat percentiles (usec): 00:31:29.172 | 1.00th=[27395], 5.00th=[31327], 10.00th=[31327], 20.00th=[31589], 00:31:29.172 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:29.172 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32113], 95.00th=[32113], 00:31:29.172 | 99.00th=[34866], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914], 00:31:29.172 | 99.99th=[39584] 00:31:29.172 bw ( KiB/s): min= 1920, max= 2176, per=4.20%, avg=2016.00, stdev=70.42, samples=20 00:31:29.172 iops : min= 480, max= 544, avg=504.00, stdev=17.60, samples=20 00:31:29.172 lat (msec) : 10=0.63%, 20=0.32%, 50=99.05% 00:31:29.172 cpu : usr=98.56%, sys=0.80%, ctx=59, majf=0, minf=1634 00:31:29.172 IO depths : 1=5.3%, 2=11.6%, 4=25.0%, 8=50.9%, 16=7.2%, 32=0.0%, >=64=0.0% 00:31:29.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.172 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.172 issued rwts: total=5056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:29.172 filename0: (groupid=0, jobs=1): err= 0: pid=2208882: Wed May 15 00:46:54 2024 00:31:29.172 read: IOPS=499, BW=1998KiB/s (2046kB/s)(19.6MiB/10020msec) 00:31:29.172 slat (usec): min=3, max=136, avg=31.76, stdev=14.94 00:31:29.172 clat (usec): min=19072, max=66078, avg=31776.03, stdev=1486.91 00:31:29.172 lat (usec): min=19081, max=66099, avg=31807.79, stdev=1484.75 00:31:29.172 clat percentiles (usec): 00:31:29.172 | 1.00th=[31065], 5.00th=[31327], 10.00th=[31327], 20.00th=[31327], 00:31:29.172 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31851], 00:31:29.172 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32113], 95.00th=[32113], 00:31:29.172 | 99.00th=[34866], 99.50th=[35390], 99.90th=[53740], 99.95th=[53740], 00:31:29.172 | 99.99th=[66323] 00:31:29.172 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1996.80, stdev=76.58, samples=20 00:31:29.172 iops : min= 448, max= 512, avg=499.20, stdev=19.14, samples=20 00:31:29.172 lat (msec) : 20=0.08%, 50=99.60%, 100=0.32% 00:31:29.172 cpu : usr=98.95%, sys=0.63%, ctx=36, majf=0, minf=1631 00:31:29.172 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:29.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:31:29.172 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.172 issued rwts: total=5006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:29.172 filename0: (groupid=0, jobs=1): err= 0: pid=2208883: Wed May 15 00:46:54 2024 00:31:29.172 read: IOPS=499, BW=2000KiB/s (2048kB/s)(19.6MiB/10018msec) 00:31:29.172 slat (usec): min=3, max=105, avg=33.74, stdev=14.83 00:31:29.172 clat (usec): min=19961, max=60750, avg=31692.88, stdev=1805.32 00:31:29.172 lat (usec): min=19972, max=60770, avg=31726.63, stdev=1804.53 00:31:29.172 clat percentiles (usec): 00:31:29.172 | 1.00th=[31065], 5.00th=[31065], 10.00th=[31327], 20.00th=[31327], 00:31:29.172 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:31:29.172 | 70.00th=[31851], 80.00th=[31851], 90.00th=[31851], 95.00th=[32113], 00:31:29.172 | 99.00th=[32900], 99.50th=[34866], 99.90th=[60556], 99.95th=[60556], 00:31:29.172 | 99.99th=[60556] 00:31:29.172 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1995.95, stdev=76.07, samples=20 00:31:29.172 iops : min= 448, max= 512, avg=498.95, stdev=19.00, samples=20 00:31:29.172 lat (msec) : 20=0.06%, 50=99.62%, 100=0.32% 00:31:29.172 cpu : usr=98.98%, sys=0.62%, ctx=23, majf=0, minf=1632 00:31:29.172 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:29.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.172 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.172 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:29.172 filename0: (groupid=0, jobs=1): err= 0: pid=2208884: Wed May 15 00:46:54 2024 00:31:29.172 read: IOPS=500, BW=2001KiB/s (2049kB/s)(19.6MiB/10010msec) 00:31:29.172 slat (nsec): min=6672, max=98176, avg=36261.49, stdev=15333.36 00:31:29.172 clat (usec): min=17682, max=57533, avg=31632.07, stdev=1738.53 00:31:29.172 lat (usec): min=17690, max=57559, avg=31668.33, stdev=1738.08 00:31:29.172 clat percentiles (usec): 00:31:29.172 | 1.00th=[30802], 5.00th=[31065], 10.00th=[31327], 20.00th=[31327], 00:31:29.172 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:31:29.172 | 70.00th=[31589], 80.00th=[31851], 90.00th=[31851], 95.00th=[32113], 00:31:29.172 | 99.00th=[32900], 99.50th=[34866], 99.90th=[57410], 99.95th=[57410], 00:31:29.172 | 99.99th=[57410] 00:31:29.172 bw ( KiB/s): min= 1795, max= 2048, per=4.16%, avg=1995.70, stdev=75.47, samples=20 00:31:29.172 iops : min= 448, max= 512, avg=498.85, stdev=18.96, samples=20 00:31:29.172 lat (msec) : 20=0.36%, 50=99.32%, 100=0.32% 00:31:29.172 cpu : usr=99.05%, sys=0.55%, ctx=13, majf=0, minf=1634 00:31:29.172 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:29.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.172 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.172 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:29.172 filename0: (groupid=0, jobs=1): err= 0: pid=2208885: Wed May 15 00:46:54 2024 00:31:29.172 read: IOPS=499, BW=1999KiB/s (2047kB/s)(19.6MiB/10022msec) 00:31:29.172 slat (usec): min=5, max=125, avg=34.39, stdev=16.80 00:31:29.172 clat (usec): min=30132, max=54512, avg=31721.72, stdev=1342.70 00:31:29.172 lat (usec): min=30146, 
max=54551, avg=31756.11, stdev=1341.22 00:31:29.172 clat percentiles (usec): 00:31:29.172 | 1.00th=[31065], 5.00th=[31065], 10.00th=[31327], 20.00th=[31327], 00:31:29.172 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:31:29.172 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32113], 95.00th=[32113], 00:31:29.172 | 99.00th=[32900], 99.50th=[34866], 99.90th=[54264], 99.95th=[54264], 00:31:29.172 | 99.99th=[54264] 00:31:29.172 bw ( KiB/s): min= 1795, max= 2048, per=4.16%, avg=1996.95, stdev=76.15, samples=20 00:31:29.172 iops : min= 448, max= 512, avg=499.20, stdev=19.14, samples=20 00:31:29.172 lat (msec) : 50=99.68%, 100=0.32% 00:31:29.172 cpu : usr=98.99%, sys=0.58%, ctx=32, majf=0, minf=1636 00:31:29.172 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:29.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.172 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.172 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.172 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:29.172 filename0: (groupid=0, jobs=1): err= 0: pid=2208886: Wed May 15 00:46:54 2024 00:31:29.172 read: IOPS=501, BW=2005KiB/s (2053kB/s)(19.6MiB/10002msec) 00:31:29.172 slat (usec): min=6, max=143, avg=35.56, stdev=15.66 00:31:29.172 clat (usec): min=18153, max=65180, avg=31626.09, stdev=1724.21 00:31:29.172 lat (usec): min=18164, max=65212, avg=31661.66, stdev=1723.63 00:31:29.172 clat percentiles (usec): 00:31:29.172 | 1.00th=[24773], 5.00th=[31065], 10.00th=[31327], 20.00th=[31327], 00:31:29.172 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:31:29.172 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32113], 95.00th=[32113], 00:31:29.172 | 99.00th=[34341], 99.50th=[39060], 99.90th=[48497], 99.95th=[48497], 00:31:29.172 | 99.99th=[65274] 00:31:29.172 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=2003.53, stdev=60.85, samples=19 00:31:29.172 iops : min= 480, max= 512, avg=500.84, stdev=15.24, samples=19 00:31:29.172 lat (msec) : 20=0.40%, 50=99.56%, 100=0.04% 00:31:29.172 cpu : usr=98.94%, sys=0.65%, ctx=17, majf=0, minf=1632 00:31:29.173 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.5%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:29.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.173 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.173 issued rwts: total=5014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.173 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:29.173 filename0: (groupid=0, jobs=1): err= 0: pid=2208887: Wed May 15 00:46:54 2024 00:31:29.173 read: IOPS=500, BW=2000KiB/s (2048kB/s)(19.6MiB/10015msec) 00:31:29.173 slat (usec): min=5, max=111, avg=37.49, stdev=15.15 00:31:29.173 clat (usec): min=17564, max=74674, avg=31667.01, stdev=2031.89 00:31:29.173 lat (usec): min=17575, max=74701, avg=31704.50, stdev=2030.76 00:31:29.173 clat percentiles (usec): 00:31:29.173 | 1.00th=[30802], 5.00th=[31065], 10.00th=[31327], 20.00th=[31327], 00:31:29.173 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:31:29.173 | 70.00th=[31851], 80.00th=[31851], 90.00th=[31851], 95.00th=[32113], 00:31:29.173 | 99.00th=[32900], 99.50th=[35390], 99.90th=[62129], 99.95th=[62129], 00:31:29.173 | 99.99th=[74974] 00:31:29.173 bw ( KiB/s): min= 1795, max= 2048, per=4.16%, avg=1996.95, stdev=76.15, samples=20 00:31:29.173 iops : min= 448, max= 512, avg=499.20, stdev=19.14, 
samples=20 00:31:29.173 lat (msec) : 20=0.36%, 50=99.32%, 100=0.32% 00:31:29.173 cpu : usr=98.70%, sys=0.67%, ctx=42, majf=0, minf=1634 00:31:29.173 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:29.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.173 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.173 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.173 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:29.173 filename0: (groupid=0, jobs=1): err= 0: pid=2208888: Wed May 15 00:46:54 2024 00:31:29.173 read: IOPS=500, BW=2001KiB/s (2049kB/s)(19.6MiB/10011msec) 00:31:29.173 slat (usec): min=6, max=104, avg=36.92, stdev=14.24 00:31:29.173 clat (usec): min=17621, max=58399, avg=31644.79, stdev=1743.46 00:31:29.173 lat (usec): min=17637, max=58426, avg=31681.71, stdev=1742.53 00:31:29.173 clat percentiles (usec): 00:31:29.173 | 1.00th=[31065], 5.00th=[31065], 10.00th=[31327], 20.00th=[31327], 00:31:29.173 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:31:29.173 | 70.00th=[31851], 80.00th=[31851], 90.00th=[31851], 95.00th=[32113], 00:31:29.173 | 99.00th=[32637], 99.50th=[34866], 99.90th=[58459], 99.95th=[58459], 00:31:29.173 | 99.99th=[58459] 00:31:29.173 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1995.55, stdev=75.90, samples=20 00:31:29.173 iops : min= 448, max= 512, avg=498.85, stdev=18.96, samples=20 00:31:29.173 lat (msec) : 20=0.32%, 50=99.36%, 100=0.32% 00:31:29.173 cpu : usr=99.04%, sys=0.57%, ctx=19, majf=0, minf=1634 00:31:29.173 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:29.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.173 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.173 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.173 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:29.173 filename1: (groupid=0, jobs=1): err= 0: pid=2208889: Wed May 15 00:46:54 2024 00:31:29.173 read: IOPS=504, BW=2020KiB/s (2068kB/s)(19.8MiB/10014msec) 00:31:29.173 slat (nsec): min=4312, max=77795, avg=17582.06, stdev=14063.48 00:31:29.173 clat (usec): min=5311, max=36918, avg=31537.92, stdev=2287.27 00:31:29.173 lat (usec): min=5320, max=36935, avg=31555.50, stdev=2288.01 00:31:29.173 clat percentiles (usec): 00:31:29.173 | 1.00th=[22938], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:31:29.173 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:29.173 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32113], 95.00th=[32113], 00:31:29.173 | 99.00th=[32637], 99.50th=[33817], 99.90th=[36963], 99.95th=[36963], 00:31:29.173 | 99.99th=[36963] 00:31:29.173 bw ( KiB/s): min= 1920, max= 2304, per=4.20%, avg=2016.00, stdev=91.69, samples=20 00:31:29.173 iops : min= 480, max= 576, avg=504.00, stdev=22.92, samples=20 00:31:29.173 lat (msec) : 10=0.59%, 20=0.36%, 50=99.05% 00:31:29.173 cpu : usr=98.63%, sys=0.74%, ctx=43, majf=0, minf=1636 00:31:29.173 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:29.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.173 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.173 issued rwts: total=5056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.173 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:29.173 filename1: (groupid=0, jobs=1): 
err= 0: pid=2208890: Wed May 15 00:46:54 2024 00:31:29.173 read: IOPS=500, BW=2000KiB/s (2048kB/s)(19.6MiB/10016msec) 00:31:29.173 slat (usec): min=5, max=154, avg=36.68, stdev=17.98 00:31:29.173 clat (usec): min=30206, max=48532, avg=31649.61, stdev=1030.62 00:31:29.173 lat (usec): min=30233, max=48565, avg=31686.28, stdev=1030.10 00:31:29.173 clat percentiles (usec): 00:31:29.173 | 1.00th=[31065], 5.00th=[31065], 10.00th=[31065], 20.00th=[31327], 00:31:29.173 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:31:29.173 | 70.00th=[31851], 80.00th=[31851], 90.00th=[31851], 95.00th=[32113], 00:31:29.173 | 99.00th=[32900], 99.50th=[35390], 99.90th=[48497], 99.95th=[48497], 00:31:29.173 | 99.99th=[48497] 00:31:29.173 bw ( KiB/s): min= 1920, max= 2048, per=4.16%, avg=1996.95, stdev=64.15, samples=20 00:31:29.173 iops : min= 480, max= 512, avg=499.20, stdev=16.08, samples=20 00:31:29.173 lat (msec) : 50=100.00% 00:31:29.173 cpu : usr=98.17%, sys=1.01%, ctx=104, majf=0, minf=1636 00:31:29.173 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:29.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.173 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.173 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.173 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:29.173 filename1: (groupid=0, jobs=1): err= 0: pid=2208891: Wed May 15 00:46:54 2024 00:31:29.173 read: IOPS=499, BW=1998KiB/s (2046kB/s)(19.6MiB/10028msec) 00:31:29.173 slat (usec): min=5, max=109, avg=16.15, stdev=14.67 00:31:29.173 clat (usec): min=30517, max=62217, avg=31917.91, stdev=1750.39 00:31:29.173 lat (usec): min=30542, max=62247, avg=31934.06, stdev=1749.17 00:31:29.173 clat percentiles (usec): 00:31:29.173 | 1.00th=[31065], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:31:29.173 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:29.173 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32113], 95.00th=[32113], 00:31:29.173 | 99.00th=[32637], 99.50th=[35390], 99.90th=[62129], 99.95th=[62129], 00:31:29.173 | 99.99th=[62129] 00:31:29.173 bw ( KiB/s): min= 1795, max= 2048, per=4.16%, avg=1996.95, stdev=76.15, samples=20 00:31:29.173 iops : min= 448, max= 512, avg=499.20, stdev=19.14, samples=20 00:31:29.173 lat (msec) : 50=99.68%, 100=0.32% 00:31:29.173 cpu : usr=98.22%, sys=1.06%, ctx=99, majf=0, minf=1633 00:31:29.173 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:29.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.173 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.173 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.173 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:29.173 filename1: (groupid=0, jobs=1): err= 0: pid=2208892: Wed May 15 00:46:54 2024 00:31:29.173 read: IOPS=499, BW=1998KiB/s (2046kB/s)(19.6MiB/10027msec) 00:31:29.173 slat (usec): min=5, max=106, avg=23.62, stdev=16.94 00:31:29.173 clat (usec): min=30193, max=60670, avg=31863.78, stdev=1671.45 00:31:29.173 lat (usec): min=30245, max=60697, avg=31887.40, stdev=1669.39 00:31:29.173 clat percentiles (usec): 00:31:29.173 | 1.00th=[31065], 5.00th=[31327], 10.00th=[31327], 20.00th=[31589], 00:31:29.173 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:29.173 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32113], 95.00th=[32113], 
00:31:29.173 | 99.00th=[32900], 99.50th=[35390], 99.90th=[60556], 99.95th=[60556], 00:31:29.173 | 99.99th=[60556] 00:31:29.173 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1996.80, stdev=76.58, samples=20 00:31:29.173 iops : min= 448, max= 512, avg=499.20, stdev=19.14, samples=20 00:31:29.173 lat (msec) : 50=99.68%, 100=0.32% 00:31:29.173 cpu : usr=98.26%, sys=0.96%, ctx=156, majf=0, minf=1636 00:31:29.173 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:29.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.173 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.173 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.173 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:29.173 filename1: (groupid=0, jobs=1): err= 0: pid=2208893: Wed May 15 00:46:54 2024 00:31:29.173 read: IOPS=500, BW=2000KiB/s (2048kB/s)(19.6MiB/10014msec) 00:31:29.173 slat (usec): min=6, max=122, avg=35.42, stdev=17.80 00:31:29.173 clat (usec): min=19978, max=57137, avg=31645.11, stdev=1622.76 00:31:29.173 lat (usec): min=19987, max=57163, avg=31680.53, stdev=1622.64 00:31:29.173 clat percentiles (usec): 00:31:29.173 | 1.00th=[31065], 5.00th=[31065], 10.00th=[31327], 20.00th=[31327], 00:31:29.173 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:31:29.173 | 70.00th=[31589], 80.00th=[31851], 90.00th=[31851], 95.00th=[32113], 00:31:29.174 | 99.00th=[32637], 99.50th=[35390], 99.90th=[56886], 99.95th=[56886], 00:31:29.174 | 99.99th=[56886] 00:31:29.174 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1996.80, stdev=76.58, samples=20 00:31:29.174 iops : min= 448, max= 512, avg=499.20, stdev=19.14, samples=20 00:31:29.174 lat (msec) : 20=0.06%, 50=99.62%, 100=0.32% 00:31:29.174 cpu : usr=98.69%, sys=0.67%, ctx=56, majf=0, minf=1634 00:31:29.174 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:29.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.174 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.174 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.174 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:29.174 filename1: (groupid=0, jobs=1): err= 0: pid=2208894: Wed May 15 00:46:54 2024 00:31:29.174 read: IOPS=504, BW=2017KiB/s (2065kB/s)(19.8MiB/10028msec) 00:31:29.174 slat (usec): min=5, max=131, avg=31.70, stdev=20.23 00:31:29.174 clat (usec): min=6427, max=36208, avg=31484.55, stdev=2152.27 00:31:29.174 lat (usec): min=6436, max=36217, avg=31516.25, stdev=2153.08 00:31:29.174 clat percentiles (usec): 00:31:29.174 | 1.00th=[28967], 5.00th=[31065], 10.00th=[31327], 20.00th=[31327], 00:31:29.174 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31851], 00:31:29.174 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32113], 95.00th=[32113], 00:31:29.174 | 99.00th=[32637], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:31:29.174 | 99.99th=[36439] 00:31:29.174 bw ( KiB/s): min= 1920, max= 2176, per=4.20%, avg=2016.00, stdev=70.42, samples=20 00:31:29.174 iops : min= 480, max= 544, avg=504.00, stdev=17.60, samples=20 00:31:29.174 lat (msec) : 10=0.63%, 20=0.32%, 50=99.05% 00:31:29.174 cpu : usr=99.01%, sys=0.60%, ctx=13, majf=0, minf=1632 00:31:29.174 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:29.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.174 complete : 
0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.174 issued rwts: total=5056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.174 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:29.174 filename1: (groupid=0, jobs=1): err= 0: pid=2208895: Wed May 15 00:46:54 2024 00:31:29.174 read: IOPS=499, BW=1999KiB/s (2047kB/s)(19.6MiB/10020msec) 00:31:29.174 slat (usec): min=4, max=114, avg=36.39, stdev=16.93 00:31:29.174 clat (usec): min=19956, max=63233, avg=31668.11, stdev=1934.75 00:31:29.174 lat (usec): min=19976, max=63255, avg=31704.50, stdev=1934.25 00:31:29.174 clat percentiles (usec): 00:31:29.174 | 1.00th=[31065], 5.00th=[31065], 10.00th=[31327], 20.00th=[31327], 00:31:29.174 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:31:29.174 | 70.00th=[31851], 80.00th=[31851], 90.00th=[31851], 95.00th=[32113], 00:31:29.174 | 99.00th=[32900], 99.50th=[35390], 99.90th=[63177], 99.95th=[63177], 00:31:29.174 | 99.99th=[63177] 00:31:29.174 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1996.80, stdev=76.58, samples=20 00:31:29.174 iops : min= 448, max= 512, avg=499.20, stdev=19.14, samples=20 00:31:29.174 lat (msec) : 20=0.06%, 50=99.62%, 100=0.32% 00:31:29.174 cpu : usr=98.77%, sys=0.70%, ctx=116, majf=0, minf=1634 00:31:29.174 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:29.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.174 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.174 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.174 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:29.174 filename1: (groupid=0, jobs=1): err= 0: pid=2208896: Wed May 15 00:46:54 2024 00:31:29.174 read: IOPS=499, BW=2000KiB/s (2048kB/s)(19.6MiB/10018msec) 00:31:29.174 slat (usec): min=4, max=121, avg=37.25, stdev=17.12 00:31:29.174 clat (usec): min=30336, max=48819, avg=31656.43, stdev=1041.41 00:31:29.174 lat (usec): min=30346, max=48841, avg=31693.68, stdev=1040.60 00:31:29.174 clat percentiles (usec): 00:31:29.174 | 1.00th=[31065], 5.00th=[31065], 10.00th=[31327], 20.00th=[31327], 00:31:29.174 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:31:29.174 | 70.00th=[31851], 80.00th=[31851], 90.00th=[31851], 95.00th=[32113], 00:31:29.174 | 99.00th=[32900], 99.50th=[34866], 99.90th=[49021], 99.95th=[49021], 00:31:29.174 | 99.99th=[49021] 00:31:29.174 bw ( KiB/s): min= 1920, max= 2048, per=4.16%, avg=1996.80, stdev=64.34, samples=20 00:31:29.174 iops : min= 480, max= 512, avg=499.20, stdev=16.08, samples=20 00:31:29.174 lat (msec) : 50=100.00% 00:31:29.174 cpu : usr=98.90%, sys=0.57%, ctx=76, majf=0, minf=1634 00:31:29.174 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:29.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.174 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.174 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.174 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:29.174 filename2: (groupid=0, jobs=1): err= 0: pid=2208897: Wed May 15 00:46:54 2024 00:31:29.174 read: IOPS=499, BW=1996KiB/s (2044kB/s)(19.5MiB/10002msec) 00:31:29.174 slat (nsec): min=5675, max=79782, avg=17421.03, stdev=14507.33 00:31:29.174 clat (usec): min=17897, max=78782, avg=31870.22, stdev=2131.22 00:31:29.174 lat (usec): min=17908, max=78814, avg=31887.65, stdev=2131.06 00:31:29.174 clat 
percentiles (usec): 00:31:29.174 | 1.00th=[31327], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:31:29.174 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:31:29.174 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32113], 95.00th=[32113], 00:31:29.174 | 99.00th=[33817], 99.50th=[36439], 99.90th=[66323], 99.95th=[66323], 00:31:29.174 | 99.99th=[79168] 00:31:29.174 bw ( KiB/s): min= 1795, max= 2048, per=4.15%, avg=1994.26, stdev=77.26, samples=19 00:31:29.174 iops : min= 448, max= 512, avg=498.53, stdev=19.42, samples=19 00:31:29.174 lat (msec) : 20=0.08%, 50=99.60%, 100=0.32% 00:31:29.174 cpu : usr=98.89%, sys=0.64%, ctx=56, majf=0, minf=1636 00:31:29.174 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:29.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.174 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.174 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.174 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:29.174 filename2: (groupid=0, jobs=1): err= 0: pid=2208898: Wed May 15 00:46:54 2024 00:31:29.174 read: IOPS=500, BW=2000KiB/s (2048kB/s)(19.6MiB/10010msec) 00:31:29.174 slat (usec): min=6, max=113, avg=36.13, stdev=15.41 00:31:29.174 clat (usec): min=17670, max=57495, avg=31646.27, stdev=1722.53 00:31:29.174 lat (usec): min=17686, max=57523, avg=31682.39, stdev=1721.92 00:31:29.174 clat percentiles (usec): 00:31:29.174 | 1.00th=[30802], 5.00th=[31065], 10.00th=[31327], 20.00th=[31327], 00:31:29.174 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:31:29.174 | 70.00th=[31589], 80.00th=[31851], 90.00th=[31851], 95.00th=[32113], 00:31:29.174 | 99.00th=[32900], 99.50th=[35390], 99.90th=[57410], 99.95th=[57410], 00:31:29.174 | 99.99th=[57410] 00:31:29.174 bw ( KiB/s): min= 1795, max= 2048, per=4.16%, avg=1995.70, stdev=75.47, samples=20 00:31:29.174 iops : min= 448, max= 512, avg=498.85, stdev=18.96, samples=20 00:31:29.174 lat (msec) : 20=0.28%, 50=99.36%, 100=0.36% 00:31:29.174 cpu : usr=98.59%, sys=0.81%, ctx=55, majf=0, minf=1633 00:31:29.174 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:29.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.174 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.174 issued rwts: total=5006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.174 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:29.174 filename2: (groupid=0, jobs=1): err= 0: pid=2208899: Wed May 15 00:46:54 2024 00:31:29.174 read: IOPS=500, BW=2000KiB/s (2048kB/s)(19.6MiB/10014msec) 00:31:29.174 slat (usec): min=6, max=124, avg=35.23, stdev=16.71 00:31:29.174 clat (usec): min=19987, max=57198, avg=31656.33, stdev=1624.82 00:31:29.174 lat (usec): min=20002, max=57224, avg=31691.57, stdev=1624.50 00:31:29.174 clat percentiles (usec): 00:31:29.174 | 1.00th=[31065], 5.00th=[31065], 10.00th=[31327], 20.00th=[31327], 00:31:29.174 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:31:29.174 | 70.00th=[31851], 80.00th=[31851], 90.00th=[31851], 95.00th=[32113], 00:31:29.174 | 99.00th=[32900], 99.50th=[34866], 99.90th=[57410], 99.95th=[57410], 00:31:29.174 | 99.99th=[57410] 00:31:29.174 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1996.80, stdev=76.58, samples=20 00:31:29.174 iops : min= 448, max= 512, avg=499.20, stdev=19.14, samples=20 00:31:29.174 lat (msec) : 20=0.02%, 
50=99.66%, 100=0.32% 00:31:29.174 cpu : usr=98.56%, sys=0.78%, ctx=91, majf=0, minf=1635 00:31:29.175 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:29.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.175 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.175 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:29.175 filename2: (groupid=0, jobs=1): err= 0: pid=2208900: Wed May 15 00:46:54 2024 00:31:29.175 read: IOPS=500, BW=2001KiB/s (2049kB/s)(19.6MiB/10012msec) 00:31:29.175 slat (nsec): min=4282, max=90813, avg=36371.94, stdev=14326.88 00:31:29.175 clat (usec): min=17696, max=59130, avg=31642.28, stdev=1777.11 00:31:29.175 lat (usec): min=17707, max=59151, avg=31678.65, stdev=1776.45 00:31:29.175 clat percentiles (usec): 00:31:29.175 | 1.00th=[31065], 5.00th=[31065], 10.00th=[31327], 20.00th=[31327], 00:31:29.175 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:31:29.175 | 70.00th=[31589], 80.00th=[31851], 90.00th=[31851], 95.00th=[32113], 00:31:29.175 | 99.00th=[32900], 99.50th=[35390], 99.90th=[58983], 99.95th=[58983], 00:31:29.175 | 99.99th=[58983] 00:31:29.175 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1995.55, stdev=75.90, samples=20 00:31:29.175 iops : min= 448, max= 512, avg=498.85, stdev=18.96, samples=20 00:31:29.175 lat (msec) : 20=0.32%, 50=99.36%, 100=0.32% 00:31:29.175 cpu : usr=98.45%, sys=0.85%, ctx=100, majf=0, minf=1636 00:31:29.175 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:29.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.175 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.175 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:29.175 filename2: (groupid=0, jobs=1): err= 0: pid=2208901: Wed May 15 00:46:54 2024 00:31:29.175 read: IOPS=499, BW=1998KiB/s (2046kB/s)(19.6MiB/10026msec) 00:31:29.175 slat (nsec): min=5907, max=82548, avg=18135.56, stdev=10822.47 00:31:29.175 clat (usec): min=27392, max=58339, avg=31894.31, stdev=1628.38 00:31:29.175 lat (usec): min=27401, max=58380, avg=31912.44, stdev=1627.79 00:31:29.175 clat percentiles (usec): 00:31:29.175 | 1.00th=[30540], 5.00th=[31327], 10.00th=[31589], 20.00th=[31589], 00:31:29.175 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:29.175 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32113], 95.00th=[32113], 00:31:29.175 | 99.00th=[35914], 99.50th=[35914], 99.90th=[58459], 99.95th=[58459], 00:31:29.175 | 99.99th=[58459] 00:31:29.175 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1996.80, stdev=75.33, samples=20 00:31:29.175 iops : min= 448, max= 512, avg=499.20, stdev=18.83, samples=20 00:31:29.175 lat (msec) : 50=99.68%, 100=0.32% 00:31:29.175 cpu : usr=98.91%, sys=0.70%, ctx=17, majf=0, minf=1637 00:31:29.175 IO depths : 1=5.2%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.3%, 32=0.0%, >=64=0.0% 00:31:29.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.175 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.175 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:29.175 filename2: (groupid=0, jobs=1): err= 0: pid=2208902: Wed May 15 00:46:54 2024 
00:31:29.175 read: IOPS=499, BW=1999KiB/s (2047kB/s)(19.6MiB/10022msec) 00:31:29.175 slat (usec): min=5, max=119, avg=33.47, stdev=22.62 00:31:29.175 clat (usec): min=19245, max=55942, avg=31789.94, stdev=1463.35 00:31:29.175 lat (usec): min=19258, max=55971, avg=31823.40, stdev=1459.85 00:31:29.175 clat percentiles (usec): 00:31:29.175 | 1.00th=[31065], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:31:29.175 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31589], 60.00th=[31851], 00:31:29.175 | 70.00th=[31851], 80.00th=[31851], 90.00th=[32113], 95.00th=[32113], 00:31:29.175 | 99.00th=[32900], 99.50th=[35390], 99.90th=[55837], 99.95th=[55837], 00:31:29.175 | 99.99th=[55837] 00:31:29.175 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1996.80, stdev=76.58, samples=20 00:31:29.175 iops : min= 448, max= 512, avg=499.20, stdev=19.14, samples=20 00:31:29.175 lat (msec) : 20=0.04%, 50=99.64%, 100=0.32% 00:31:29.175 cpu : usr=99.08%, sys=0.53%, ctx=16, majf=0, minf=1635 00:31:29.175 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:29.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.175 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.175 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:29.175 filename2: (groupid=0, jobs=1): err= 0: pid=2208903: Wed May 15 00:46:54 2024 00:31:29.175 read: IOPS=500, BW=2001KiB/s (2049kB/s)(19.6MiB/10009msec) 00:31:29.175 slat (nsec): min=3474, max=60495, avg=14325.19, stdev=7333.67 00:31:29.175 clat (usec): min=17612, max=49905, avg=31862.88, stdev=2268.95 00:31:29.175 lat (usec): min=17621, max=49938, avg=31877.20, stdev=2268.74 00:31:29.175 clat percentiles (usec): 00:31:29.175 | 1.00th=[23462], 5.00th=[31589], 10.00th=[31589], 20.00th=[31589], 00:31:29.175 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:31:29.175 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32113], 95.00th=[32113], 00:31:29.175 | 99.00th=[45351], 99.50th=[45876], 99.90th=[50070], 99.95th=[50070], 00:31:29.175 | 99.99th=[50070] 00:31:29.175 bw ( KiB/s): min= 1920, max= 2064, per=4.16%, avg=1996.80, stdev=63.28, samples=20 00:31:29.175 iops : min= 480, max= 516, avg=499.20, stdev=15.82, samples=20 00:31:29.175 lat (msec) : 20=0.96%, 50=99.04% 00:31:29.175 cpu : usr=99.12%, sys=0.52%, ctx=17, majf=0, minf=1636 00:31:29.175 IO depths : 1=3.9%, 2=10.2%, 4=25.0%, 8=52.3%, 16=8.6%, 32=0.0%, >=64=0.0% 00:31:29.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.175 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.175 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:29.175 filename2: (groupid=0, jobs=1): err= 0: pid=2208904: Wed May 15 00:46:54 2024 00:31:29.175 read: IOPS=500, BW=2000KiB/s (2048kB/s)(19.6MiB/10014msec) 00:31:29.175 slat (usec): min=6, max=116, avg=38.79, stdev=18.41 00:31:29.175 clat (usec): min=17746, max=61630, avg=31622.47, stdev=1905.75 00:31:29.175 lat (usec): min=17772, max=61657, avg=31661.26, stdev=1905.16 00:31:29.175 clat percentiles (usec): 00:31:29.175 | 1.00th=[30802], 5.00th=[31065], 10.00th=[31065], 20.00th=[31327], 00:31:29.175 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31589], 00:31:29.175 | 70.00th=[31589], 80.00th=[31851], 90.00th=[31851], 95.00th=[32113], 00:31:29.175 | 
99.00th=[32900], 99.50th=[34866], 99.90th=[61604], 99.95th=[61604], 00:31:29.175 | 99.99th=[61604] 00:31:29.175 bw ( KiB/s): min= 1795, max= 2048, per=4.16%, avg=1996.95, stdev=76.15, samples=20 00:31:29.175 iops : min= 448, max= 512, avg=499.20, stdev=19.14, samples=20 00:31:29.175 lat (msec) : 20=0.32%, 50=99.36%, 100=0.32% 00:31:29.175 cpu : usr=98.96%, sys=0.66%, ctx=16, majf=0, minf=1636 00:31:29.175 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:29.175 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.175 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:29.175 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:29.175 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:29.175 00:31:29.175 Run status group 0 (all jobs): 00:31:29.175 READ: bw=46.9MiB/s (49.1MB/s), 1996KiB/s-2020KiB/s (2044kB/s-2068kB/s), io=470MiB (493MB), run=10002-10028msec 00:31:29.175 ----------------------------------------------------- 00:31:29.175 Suppressions used: 00:31:29.175 count bytes template 00:31:29.175 45 402 /usr/src/fio/parse.c 00:31:29.175 1 8 libtcmalloc_minimal.so 00:31:29.175 1 904 libcrypto.so 00:31:29.175 ----------------------------------------------------- 00:31:29.175 00:31:29.175 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:29.175 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:29.175 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:29.175 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:29.175 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:29.175 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:29.175 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:29.175 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:29.175 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:29.175 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:29.175 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:29.175 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:29.175 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 
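For reference, the subsystem setup and teardown that target/dif.sh drives through rpc_cmd around this point in the trace can be replayed by hand. A minimal sketch, assuming a running nvmf_tgt and that rpc_cmd here is the harness's wrapper around scripts/rpc.py; the bdev name, NQN, serial number and listener address are copied from the trace, not invented:

  # create a 64 MB null bdev with 512-byte blocks, 16-byte metadata, DIF type 1
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  # export it over NVMe/TCP on 10.0.0.2:4420
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # teardown, mirroring the destroy_subsystems calls in this trace
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py bdev_null_delete bdev_null0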
00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:29.176 bdev_null0 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:29.176 00:46:54 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:29.176 [2024-05-15 00:46:54.981883] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:29.176 bdev_null1 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:29.176 00:46:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:29.176 00:46:55 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:29.176 { 00:31:29.176 "params": { 00:31:29.176 "name": "Nvme$subsystem", 00:31:29.176 "trtype": "$TEST_TRANSPORT", 00:31:29.176 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:29.176 "adrfam": "ipv4", 00:31:29.176 "trsvcid": "$NVMF_PORT", 00:31:29.176 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:29.176 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:29.176 "hdgst": ${hdgst:-false}, 00:31:29.176 "ddgst": ${ddgst:-false} 00:31:29.176 }, 00:31:29.176 "method": "bdev_nvme_attach_controller" 00:31:29.176 } 00:31:29.176 EOF 00:31:29.176 )") 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1336 -- # local sanitizers 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1338 -- # shift 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local asan_lib= 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # grep libasan 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:29.176 00:46:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:29.176 { 00:31:29.176 "params": { 00:31:29.176 "name": "Nvme$subsystem", 00:31:29.177 "trtype": "$TEST_TRANSPORT", 00:31:29.177 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:29.177 "adrfam": "ipv4", 00:31:29.177 "trsvcid": "$NVMF_PORT", 00:31:29.177 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:29.177 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:29.177 "hdgst": ${hdgst:-false}, 00:31:29.177 "ddgst": ${ddgst:-false} 00:31:29.177 }, 00:31:29.177 "method": "bdev_nvme_attach_controller" 00:31:29.177 } 00:31:29.177 EOF 00:31:29.177 )") 00:31:29.177 00:46:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:29.177 00:46:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:29.177 00:46:55 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:29.177 00:46:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:29.177 00:46:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:29.177 00:46:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:29.177 00:46:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:29.177 00:46:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:29.177 00:46:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:29.177 "params": { 00:31:29.177 "name": "Nvme0", 00:31:29.177 "trtype": "tcp", 00:31:29.177 "traddr": "10.0.0.2", 00:31:29.177 "adrfam": "ipv4", 00:31:29.177 "trsvcid": "4420", 00:31:29.177 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:29.177 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:29.177 "hdgst": false, 00:31:29.177 "ddgst": false 00:31:29.177 }, 00:31:29.177 "method": "bdev_nvme_attach_controller" 00:31:29.177 },{ 00:31:29.177 "params": { 00:31:29.177 "name": "Nvme1", 00:31:29.177 "trtype": "tcp", 00:31:29.177 "traddr": "10.0.0.2", 00:31:29.177 "adrfam": "ipv4", 00:31:29.177 "trsvcid": "4420", 00:31:29.177 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:29.177 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:29.177 "hdgst": false, 00:31:29.177 "ddgst": false 00:31:29.177 }, 00:31:29.177 "method": "bdev_nvme_attach_controller" 00:31:29.177 }' 00:31:29.177 00:46:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:29.177 00:46:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:29.177 00:46:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # break 00:31:29.177 00:46:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:29.177 00:46:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:29.435 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:29.435 ... 00:31:29.435 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:29.435 ... 
00:31:29.435 fio-3.35 00:31:29.435 Starting 4 threads 00:31:29.435 EAL: No free 2048 kB hugepages reported on node 1 00:31:35.994 00:31:35.994 filename0: (groupid=0, jobs=1): err= 0: pid=2211420: Wed May 15 00:47:01 2024 00:31:35.994 read: IOPS=2472, BW=19.3MiB/s (20.3MB/s)(96.6MiB/5003msec) 00:31:35.994 slat (nsec): min=3990, max=45120, avg=10182.89, stdev=5834.44 00:31:35.994 clat (usec): min=585, max=5869, avg=3201.42, stdev=366.96 00:31:35.994 lat (usec): min=592, max=5877, avg=3211.60, stdev=367.00 00:31:35.994 clat percentiles (usec): 00:31:35.994 | 1.00th=[ 2212], 5.00th=[ 2704], 10.00th=[ 2900], 20.00th=[ 3097], 00:31:35.994 | 30.00th=[ 3130], 40.00th=[ 3163], 50.00th=[ 3195], 60.00th=[ 3228], 00:31:35.994 | 70.00th=[ 3261], 80.00th=[ 3294], 90.00th=[ 3392], 95.00th=[ 3687], 00:31:35.994 | 99.00th=[ 4752], 99.50th=[ 5145], 99.90th=[ 5669], 99.95th=[ 5800], 00:31:35.994 | 99.99th=[ 5866] 00:31:35.994 bw ( KiB/s): min=19088, max=20736, per=25.09%, avg=19780.80, stdev=443.60, samples=10 00:31:35.994 iops : min= 2386, max= 2592, avg=2472.60, stdev=55.45, samples=10 00:31:35.994 lat (usec) : 750=0.02%, 1000=0.05% 00:31:35.994 lat (msec) : 2=0.60%, 4=96.43%, 10=2.91% 00:31:35.994 cpu : usr=97.38%, sys=2.32%, ctx=6, majf=0, minf=1632 00:31:35.994 IO depths : 1=0.2%, 2=16.9%, 4=55.1%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:35.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.994 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.994 issued rwts: total=12371,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.994 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:35.994 filename0: (groupid=0, jobs=1): err= 0: pid=2211421: Wed May 15 00:47:01 2024 00:31:35.994 read: IOPS=2440, BW=19.1MiB/s (20.0MB/s)(95.3MiB/5001msec) 00:31:35.994 slat (nsec): min=4357, max=45599, avg=10368.69, stdev=5836.75 00:31:35.994 clat (usec): min=648, max=6697, avg=3242.87, stdev=446.07 00:31:35.994 lat (usec): min=655, max=6704, avg=3253.24, stdev=445.88 00:31:35.994 clat percentiles (usec): 00:31:35.994 | 1.00th=[ 2114], 5.00th=[ 2802], 10.00th=[ 2933], 20.00th=[ 3097], 00:31:35.994 | 30.00th=[ 3130], 40.00th=[ 3163], 50.00th=[ 3195], 60.00th=[ 3228], 00:31:35.994 | 70.00th=[ 3261], 80.00th=[ 3326], 90.00th=[ 3523], 95.00th=[ 3916], 00:31:35.994 | 99.00th=[ 5276], 99.50th=[ 5538], 99.90th=[ 5866], 99.95th=[ 5932], 00:31:35.994 | 99.99th=[ 6194] 00:31:35.994 bw ( KiB/s): min=18912, max=20016, per=24.81%, avg=19557.33, stdev=375.74, samples=9 00:31:35.994 iops : min= 2364, max= 2502, avg=2444.67, stdev=46.97, samples=9 00:31:35.994 lat (usec) : 750=0.11%, 1000=0.13% 00:31:35.994 lat (msec) : 2=0.62%, 4=94.76%, 10=4.38% 00:31:35.994 cpu : usr=97.62%, sys=2.10%, ctx=6, majf=0, minf=1634 00:31:35.994 IO depths : 1=0.1%, 2=15.7%, 4=55.9%, 8=28.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:35.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.994 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.994 issued rwts: total=12204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.994 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:35.994 filename1: (groupid=0, jobs=1): err= 0: pid=2211422: Wed May 15 00:47:01 2024 00:31:35.994 read: IOPS=2445, BW=19.1MiB/s (20.0MB/s)(95.6MiB/5001msec) 00:31:35.994 slat (nsec): min=3674, max=46465, avg=10413.95, stdev=5896.27 00:31:35.994 clat (usec): min=632, max=6109, avg=3234.72, stdev=446.22 00:31:35.994 lat (usec): min=639, max=6121, avg=3245.13, 
stdev=446.09 00:31:35.994 clat percentiles (usec): 00:31:35.994 | 1.00th=[ 1942], 5.00th=[ 2802], 10.00th=[ 2966], 20.00th=[ 3097], 00:31:35.994 | 30.00th=[ 3130], 40.00th=[ 3163], 50.00th=[ 3195], 60.00th=[ 3228], 00:31:35.994 | 70.00th=[ 3261], 80.00th=[ 3294], 90.00th=[ 3458], 95.00th=[ 3851], 00:31:35.994 | 99.00th=[ 5276], 99.50th=[ 5538], 99.90th=[ 5800], 99.95th=[ 5866], 00:31:35.994 | 99.99th=[ 5932] 00:31:35.994 bw ( KiB/s): min=19104, max=19824, per=24.88%, avg=19616.00, stdev=258.74, samples=9 00:31:35.994 iops : min= 2388, max= 2478, avg=2452.00, stdev=32.34, samples=9 00:31:35.994 lat (usec) : 750=0.14%, 1000=0.13% 00:31:35.994 lat (msec) : 2=0.76%, 4=94.91%, 10=4.06% 00:31:35.994 cpu : usr=97.60%, sys=2.10%, ctx=6, majf=0, minf=1638 00:31:35.994 IO depths : 1=0.1%, 2=17.5%, 4=55.1%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:35.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.994 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.994 issued rwts: total=12232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.994 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:35.994 filename1: (groupid=0, jobs=1): err= 0: pid=2211423: Wed May 15 00:47:01 2024 00:31:35.994 read: IOPS=2497, BW=19.5MiB/s (20.5MB/s)(97.6MiB/5003msec) 00:31:35.994 slat (nsec): min=3370, max=44107, avg=9776.90, stdev=5567.40 00:31:35.994 clat (usec): min=658, max=5820, avg=3173.83, stdev=295.18 00:31:35.994 lat (usec): min=665, max=5828, avg=3183.60, stdev=295.47 00:31:35.994 clat percentiles (usec): 00:31:35.995 | 1.00th=[ 2311], 5.00th=[ 2704], 10.00th=[ 2900], 20.00th=[ 3097], 00:31:35.995 | 30.00th=[ 3130], 40.00th=[ 3163], 50.00th=[ 3195], 60.00th=[ 3228], 00:31:35.995 | 70.00th=[ 3261], 80.00th=[ 3294], 90.00th=[ 3359], 95.00th=[ 3523], 00:31:35.995 | 99.00th=[ 4113], 99.50th=[ 4490], 99.90th=[ 5473], 99.95th=[ 5735], 00:31:35.995 | 99.99th=[ 5800] 00:31:35.995 bw ( KiB/s): min=19568, max=20800, per=25.35%, avg=19982.40, stdev=420.45, samples=10 00:31:35.995 iops : min= 2446, max= 2600, avg=2497.80, stdev=52.56, samples=10 00:31:35.995 lat (usec) : 750=0.01%, 1000=0.02% 00:31:35.995 lat (msec) : 2=0.50%, 4=98.26%, 10=1.21% 00:31:35.995 cpu : usr=97.40%, sys=2.30%, ctx=6, majf=0, minf=1637 00:31:35.995 IO depths : 1=0.2%, 2=9.2%, 4=63.3%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:35.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.995 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:35.995 issued rwts: total=12497,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:35.995 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:35.995 00:31:35.995 Run status group 0 (all jobs): 00:31:35.995 READ: bw=77.0MiB/s (80.7MB/s), 19.1MiB/s-19.5MiB/s (20.0MB/s-20.5MB/s), io=385MiB (404MB), run=5001-5003msec 00:31:35.995 ----------------------------------------------------- 00:31:35.995 Suppressions used: 00:31:35.995 count bytes template 00:31:35.995 6 52 /usr/src/fio/parse.c 00:31:35.995 1 8 libtcmalloc_minimal.so 00:31:35.995 1 904 libcrypto.so 00:31:35.995 ----------------------------------------------------- 00:31:35.995 00:31:35.995 00:47:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:35.995 00:47:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:35.995 00:47:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:35.995 00:47:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # 
destroy_subsystem 0 00:31:35.995 00:47:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:35.995 00:47:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:35.995 00:47:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:35.995 00:47:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:35.995 00:47:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:35.995 00:47:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:35.995 00:47:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:35.995 00:47:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:35.995 00:47:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:35.995 00:47:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:35.995 00:47:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:35.995 00:47:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:35.995 00:47:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:35.995 00:47:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:35.995 00:47:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:35.995 00:47:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:35.995 00:47:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:35.995 00:47:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:35.995 00:47:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:35.995 00:47:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:35.995 00:31:35.995 real 0m26.109s 00:31:35.995 user 5m14.140s 00:31:35.995 sys 0m3.902s 00:31:35.995 00:47:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # xtrace_disable 00:31:35.995 00:47:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:35.995 ************************************ 00:31:35.995 END TEST fio_dif_rand_params 00:31:35.995 ************************************ 00:31:35.995 00:47:01 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:35.995 00:47:01 nvmf_dif -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:31:35.995 00:47:01 nvmf_dif -- common/autotest_common.sh@1104 -- # xtrace_disable 00:31:35.995 00:47:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:35.995 ************************************ 00:31:35.995 START TEST fio_dif_digest 00:31:35.995 ************************************ 00:31:35.995 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1122 -- # fio_dif_digest 00:31:35.995 00:47:01 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:35.995 00:47:01 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:35.995 00:47:01 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:35.995 00:47:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:35.995 00:47:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # 
bs=128k,128k,128k 00:31:35.995 00:47:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:35.995 00:47:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:35.995 00:47:01 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:35.995 00:47:01 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:35.995 00:47:01 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:31:35.995 00:47:01 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:35.995 00:47:01 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:35.995 00:47:01 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:35.995 00:47:01 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:35.995 00:47:01 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:35.995 00:47:01 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:35.995 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:35.995 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:35.995 bdev_null0 00:31:35.995 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:35.995 00:47:01 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:35.995 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:35.995 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:35.995 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:35.995 00:47:01 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:35.995 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:35.995 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:35.996 [2024-05-15 00:47:01.766492] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1353 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1334 -- # local fio_dir=/usr/src/fio 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1336 -- # 
local sanitizers 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1338 -- # shift 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local asan_lib= 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # for sanitizer in "${sanitizers[@]}" 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:35.996 { 00:31:35.996 "params": { 00:31:35.996 "name": "Nvme$subsystem", 00:31:35.996 "trtype": "$TEST_TRANSPORT", 00:31:35.996 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:35.996 "adrfam": "ipv4", 00:31:35.996 "trsvcid": "$NVMF_PORT", 00:31:35.996 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:35.996 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:35.996 "hdgst": ${hdgst:-false}, 00:31:35.996 "ddgst": ${ddgst:-false} 00:31:35.996 }, 00:31:35.996 "method": "bdev_nvme_attach_controller" 00:31:35.996 } 00:31:35.996 EOF 00:31:35.996 )") 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # grep libasan 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # awk '{print $3}' 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
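The xtrace above assembles the DIF digest target and the fio attach-controller JSON on the fly. Condensed into plain commands, the equivalent setup is roughly the sketch below; the ./scripts/rpc.py wrapper and the bdev.json/jobfile.fio file names are illustrative assumptions (the test drives the same RPCs through rpc_cmd and feeds fio through /dev/fd pipes), and the generated JSON with "hdgst": true and "ddgst": true is printed a little further down in the log.

# sketch only: null bdev with 16-byte metadata and DIF type 3, exported over NVMe/TCP
./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# fio then runs with ASAN preloaded ahead of the bdev ioengine plugin, as the trace shows
LD_PRELOAD='/usr/lib64/libasan.so.8 ./build/fio/spdk_bdev' \
  fio --ioengine=spdk_bdev --spdk_json_conf bdev.json jobfile.fio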
00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:35.996 "params": { 00:31:35.996 "name": "Nvme0", 00:31:35.996 "trtype": "tcp", 00:31:35.996 "traddr": "10.0.0.2", 00:31:35.996 "adrfam": "ipv4", 00:31:35.996 "trsvcid": "4420", 00:31:35.996 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:35.996 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:35.996 "hdgst": true, 00:31:35.996 "ddgst": true 00:31:35.996 }, 00:31:35.996 "method": "bdev_nvme_attach_controller" 00:31:35.996 }' 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # asan_lib=/usr/lib64/libasan.so.8 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # break 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:35.996 00:47:01 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:36.254 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:36.254 ... 00:31:36.254 fio-3.35 00:31:36.254 Starting 3 threads 00:31:36.254 EAL: No free 2048 kB hugepages reported on node 1 00:31:48.537 00:31:48.537 filename0: (groupid=0, jobs=1): err= 0: pid=2212787: Wed May 15 00:47:12 2024 00:31:48.537 read: IOPS=280, BW=35.0MiB/s (36.7MB/s)(352MiB/10046msec) 00:31:48.537 slat (nsec): min=4207, max=19991, avg=8476.95, stdev=1211.44 00:31:48.537 clat (usec): min=7971, max=53571, avg=10680.42, stdev=1263.59 00:31:48.537 lat (usec): min=7979, max=53580, avg=10688.89, stdev=1263.60 00:31:48.537 clat percentiles (usec): 00:31:48.537 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10028], 00:31:48.537 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:31:48.537 | 70.00th=[11076], 80.00th=[11207], 90.00th=[11469], 95.00th=[11731], 00:31:48.537 | 99.00th=[12518], 99.50th=[12518], 99.90th=[13435], 99.95th=[46924], 00:31:48.537 | 99.99th=[53740] 00:31:48.537 bw ( KiB/s): min=35328, max=36608, per=35.40%, avg=36006.40, stdev=400.70, samples=20 00:31:48.537 iops : min= 276, max= 286, avg=281.30, stdev= 3.13, samples=20 00:31:48.537 lat (msec) : 10=16.80%, 20=83.13%, 50=0.04%, 100=0.04% 00:31:48.537 cpu : usr=96.89%, sys=2.82%, ctx=14, majf=0, minf=1637 00:31:48.537 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.537 issued rwts: total=2815,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.537 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:48.537 filename0: (groupid=0, jobs=1): err= 0: pid=2212788: Wed May 15 00:47:12 2024 00:31:48.537 read: IOPS=254, BW=31.9MiB/s (33.4MB/s)(320MiB/10043msec) 00:31:48.537 slat (nsec): min=4567, max=24509, avg=8629.26, stdev=1294.06 00:31:48.537 clat (usec): min=9077, max=51542, avg=11742.93, stdev=1308.84 00:31:48.537 lat (usec): min=9086, max=51551, avg=11751.56, stdev=1308.90 00:31:48.537 clat percentiles (usec): 00:31:48.537 | 1.00th=[ 9896], 5.00th=[10552], 10.00th=[10814], 
20.00th=[11076], 00:31:48.537 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11731], 60.00th=[11863], 00:31:48.537 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12649], 95.00th=[13042], 00:31:48.537 | 99.00th=[13960], 99.50th=[14222], 99.90th=[17433], 99.95th=[46400], 00:31:48.537 | 99.99th=[51643] 00:31:48.537 bw ( KiB/s): min=32256, max=33792, per=32.19%, avg=32742.40, stdev=414.46, samples=20 00:31:48.537 iops : min= 252, max= 264, avg=255.80, stdev= 3.24, samples=20 00:31:48.537 lat (msec) : 10=1.45%, 20=98.48%, 50=0.04%, 100=0.04% 00:31:48.537 cpu : usr=96.86%, sys=2.76%, ctx=164, majf=0, minf=1636 00:31:48.537 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.537 issued rwts: total=2560,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.537 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:48.537 filename0: (groupid=0, jobs=1): err= 0: pid=2212789: Wed May 15 00:47:12 2024 00:31:48.537 read: IOPS=259, BW=32.4MiB/s (34.0MB/s)(326MiB/10045msec) 00:31:48.537 slat (nsec): min=4394, max=24598, avg=8482.99, stdev=1188.74 00:31:48.537 clat (usec): min=8661, max=48489, avg=11533.71, stdev=1243.87 00:31:48.537 lat (usec): min=8671, max=48497, avg=11542.20, stdev=1243.86 00:31:48.537 clat percentiles (usec): 00:31:48.537 | 1.00th=[ 9765], 5.00th=[10290], 10.00th=[10552], 20.00th=[10945], 00:31:48.537 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11469], 60.00th=[11600], 00:31:48.537 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12780], 00:31:48.537 | 99.00th=[13435], 99.50th=[13698], 99.90th=[14484], 99.95th=[45876], 00:31:48.537 | 99.99th=[48497] 00:31:48.537 bw ( KiB/s): min=32768, max=34048, per=32.78%, avg=33334.50, stdev=338.20, samples=20 00:31:48.537 iops : min= 256, max= 266, avg=260.40, stdev= 2.64, samples=20 00:31:48.537 lat (msec) : 10=1.84%, 20=98.08%, 50=0.08% 00:31:48.537 cpu : usr=97.08%, sys=2.63%, ctx=13, majf=0, minf=1632 00:31:48.537 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:48.537 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.537 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:48.537 issued rwts: total=2607,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:48.537 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:48.537 00:31:48.537 Run status group 0 (all jobs): 00:31:48.537 READ: bw=99.3MiB/s (104MB/s), 31.9MiB/s-35.0MiB/s (33.4MB/s-36.7MB/s), io=998MiB (1046MB), run=10043-10046msec 00:31:48.537 ----------------------------------------------------- 00:31:48.537 Suppressions used: 00:31:48.537 count bytes template 00:31:48.537 5 44 /usr/src/fio/parse.c 00:31:48.537 1 8 libtcmalloc_minimal.so 00:31:48.537 1 904 libcrypto.so 00:31:48.537 ----------------------------------------------------- 00:31:48.537 00:31:48.537 00:47:13 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:48.537 00:47:13 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:31:48.537 00:47:13 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:31:48.537 00:47:13 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:48.537 00:47:13 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:31:48.537 00:47:13 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:31:48.537 00:47:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:48.537 00:47:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:48.537 00:47:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:48.537 00:47:13 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:48.537 00:47:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@560 -- # xtrace_disable 00:31:48.537 00:47:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:48.537 00:47:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:31:48.537 00:31:48.537 real 0m11.610s 00:31:48.537 user 0m45.413s 00:31:48.537 sys 0m1.195s 00:31:48.537 00:47:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # xtrace_disable 00:31:48.537 00:47:13 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:48.537 ************************************ 00:31:48.537 END TEST fio_dif_digest 00:31:48.537 ************************************ 00:31:48.537 00:47:13 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:48.537 00:47:13 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:31:48.537 00:47:13 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:48.537 00:47:13 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:31:48.537 00:47:13 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:48.537 00:47:13 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:31:48.537 00:47:13 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:48.537 00:47:13 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:48.537 rmmod nvme_tcp 00:31:48.537 rmmod nvme_fabrics 00:31:48.537 rmmod nvme_keyring 00:31:48.537 00:47:13 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:48.537 00:47:13 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:31:48.537 00:47:13 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:31:48.537 00:47:13 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2201715 ']' 00:31:48.537 00:47:13 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2201715 00:31:48.537 00:47:13 nvmf_dif -- common/autotest_common.sh@947 -- # '[' -z 2201715 ']' 00:31:48.537 00:47:13 nvmf_dif -- common/autotest_common.sh@951 -- # kill -0 2201715 00:31:48.537 00:47:13 nvmf_dif -- common/autotest_common.sh@952 -- # uname 00:31:48.537 00:47:13 nvmf_dif -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:31:48.537 00:47:13 nvmf_dif -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2201715 00:31:48.537 00:47:13 nvmf_dif -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:31:48.537 00:47:13 nvmf_dif -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:31:48.537 00:47:13 nvmf_dif -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2201715' 00:31:48.537 killing process with pid 2201715 00:31:48.537 00:47:13 nvmf_dif -- common/autotest_common.sh@966 -- # kill 2201715 00:31:48.537 [2024-05-15 00:47:13.494075] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]addres 00:47:13 nvmf_dif -- common/autotest_common.sh@971 -- # wait 2201715 00:31:48.537 s.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:31:48.537 00:47:13 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:48.537 00:47:13 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 
reset 00:31:50.444 Waiting for block devices as requested 00:31:50.444 0000:c9:00.0 (8086 0a54): vfio-pci -> nvme 00:31:50.444 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:31:50.444 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:31:50.702 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:31:50.702 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:31:50.702 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:31:50.963 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:31:50.963 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:31:51.222 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:31:51.222 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:31:51.481 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:31:51.481 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:31:51.481 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:31:51.741 0000:ca:00.0 (8086 0a54): vfio-pci -> nvme 00:31:51.741 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:31:52.001 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:31:52.001 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:31:52.258 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:31:52.517 00:47:18 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:52.517 00:47:18 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:52.517 00:47:18 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:52.517 00:47:18 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:52.517 00:47:18 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:52.517 00:47:18 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:52.517 00:47:18 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.077 00:47:20 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:55.077 00:31:55.077 real 1m17.838s 00:31:55.077 user 8m4.098s 00:31:55.077 sys 0m16.294s 00:31:55.077 00:47:20 nvmf_dif -- common/autotest_common.sh@1123 -- # xtrace_disable 00:31:55.077 00:47:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:55.077 ************************************ 00:31:55.077 END TEST nvmf_dif 00:31:55.077 ************************************ 00:31:55.077 00:47:20 -- spdk/autotest.sh@289 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:55.077 00:47:20 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:31:55.077 00:47:20 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:31:55.077 00:47:20 -- common/autotest_common.sh@10 -- # set +x 00:31:55.077 ************************************ 00:31:55.077 START TEST nvmf_abort_qd_sizes 00:31:55.077 ************************************ 00:31:55.077 00:47:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:31:55.077 * Looking for test storage... 
00:31:55.077 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy-fallback != virt ]] 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:31:55.078 00:47:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:00.348 00:47:26 
nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:32:00.348 Found 0000:27:00.0 (0x8086 - 0x159b) 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:32:00.348 Found 0000:27:00.1 (0x8086 - 0x159b) 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
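Here nvmf/common.sh is walking the PCI bus for supported NICs: both ports of an Intel E810 (vendor:device 8086:159b, bound to the ice driver) at 0000:27:00.0/0000:27:00.1 are matched and their interface names read back from sysfs. A standalone approximation of that lookup is sketched below; lspci availability and its output format are assumptions, while the /sys/bus/pci/devices/<bdf>/net path is the same one the script reads.

# sketch: list net interfaces backed by Intel E810 (8086:159b) ports
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
  for net in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$net" ] && echo "Found net device under $pci: $(basename "$net")"
  done
done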
00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:32:00.348 Found net devices under 0000:27:00.0: cvl_0_0 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:32:00.348 Found net devices under 0000:27:00.1: cvl_0_1 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:32:00.348 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:00.349 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:00.349 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:00.349 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:00.349 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:00.349 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:00.349 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:00.349 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:00.349 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:00.349 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:00.609 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:00.609 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:00.609 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:00.609 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:00.609 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:32:00.609 00:32:00.609 --- 10.0.0.2 ping statistics --- 00:32:00.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.609 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:32:00.609 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:00.609 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:00.609 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.068 ms 00:32:00.609 00:32:00.609 --- 10.0.0.1 ping statistics --- 00:32:00.609 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.609 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:32:00.609 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:00.609 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:32:00.609 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:32:00.609 00:47:26 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:32:03.904 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:32:03.904 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:32:03.904 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:32:03.904 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:32:03.904 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:32:03.904 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:32:03.904 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:32:03.904 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:32:03.904 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:32:03.904 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:32:03.904 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:32:03.904 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:32:03.904 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:32:03.904 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:32:03.904 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:32:03.904 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:32:05.814 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:32:05.814 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci 00:32:06.385 00:47:32 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:06.385 00:47:32 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:06.385 00:47:32 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:06.385 00:47:32 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:06.385 00:47:32 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:06.385 00:47:32 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:06.385 00:47:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:06.385 00:47:32 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:06.385 00:47:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@721 -- # xtrace_disable 00:32:06.385 00:47:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:06.385 00:47:32 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2222516 00:32:06.385 00:47:32 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2222516 00:32:06.385 00:47:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@828 -- # '[' -z 2222516 ']' 00:32:06.385 00:47:32 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:06.385 00:47:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:06.385 00:47:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local max_retries=100 00:32:06.385 00:47:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
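By this point the target-facing port has been moved into a private network namespace and nvmf_tgt is being launched inside it, while the initiator keeps 10.0.0.1 on cvl_0_1 in the default namespace. The wiring the script performed reduces to roughly the following; interface names, addresses and the nvmf_tgt arguments are taken from the trace, the relative binary path is an assumption.

# sketch: isolate the target NIC in its own netns and run the target there
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf

Keeping the target in its own namespace forces the abort test's traffic onto a real TCP path between the two E810 ports even though both live in the same host.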
00:32:06.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:06.385 00:47:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # xtrace_disable 00:32:06.385 00:47:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:06.385 [2024-05-15 00:47:32.479574] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:32:06.385 [2024-05-15 00:47:32.479685] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:06.647 EAL: No free 2048 kB hugepages reported on node 1 00:32:06.647 [2024-05-15 00:47:32.603869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:06.647 [2024-05-15 00:47:32.704785] app.c: 604:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:06.647 [2024-05-15 00:47:32.704824] app.c: 605:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:06.647 [2024-05-15 00:47:32.704834] app.c: 610:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:06.647 [2024-05-15 00:47:32.704844] app.c: 611:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:06.647 [2024-05-15 00:47:32.704852] app.c: 612:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:06.647 [2024-05-15 00:47:32.705053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:06.647 [2024-05-15 00:47:32.705139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:06.647 [2024-05-15 00:47:32.705239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.647 [2024-05-15 00:47:32.705249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@861 -- # return 0 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@727 -- # xtrace_disable 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:c9:00.0 0000:ca:00.0 ]] 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:c9:00.0 ]] 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- 
scripts/common.sh@320 -- # uname -s 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:ca:00.0 ]] 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 2 )) 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:c9:00.0 0000:ca:00.0 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:c9:00.0 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1104 -- # xtrace_disable 00:32:07.217 00:47:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:07.217 ************************************ 00:32:07.217 START TEST spdk_target_abort 00:32:07.217 ************************************ 00:32:07.217 00:47:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1122 -- # spdk_target 00:32:07.217 00:47:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:07.218 00:47:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:c9:00.0 -b spdk_target 00:32:07.218 00:47:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:07.218 00:47:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:10.506 spdk_targetn1 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:10.506 [2024-05-15 00:47:36.141496] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:10.506 [2024-05-15 00:47:36.175889] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:32:10.506 [2024-05-15 00:47:36.176242] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:10.506 00:47:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:10.506 EAL: No free 2048 kB hugepages reported on node 1 00:32:13.787 Initializing NVMe Controllers 00:32:13.787 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:13.787 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:13.787 Initialization complete. Launching workers. 00:32:13.787 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 17865, failed: 0 00:32:13.787 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1632, failed to submit 16233 00:32:13.787 success 743, unsuccess 889, failed 0 00:32:13.787 00:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:13.787 00:47:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:13.787 EAL: No free 2048 kB hugepages reported on node 1 00:32:17.073 Initializing NVMe Controllers 00:32:17.073 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:17.073 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:17.073 Initialization complete. Launching workers. 00:32:17.073 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8563, failed: 0 00:32:17.073 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1240, failed to submit 7323 00:32:17.073 success 349, unsuccess 891, failed 0 00:32:17.073 00:47:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:17.073 00:47:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:17.073 EAL: No free 2048 kB hugepages reported on node 1 00:32:20.359 Initializing NVMe Controllers 00:32:20.359 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:20.359 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:20.359 Initialization complete. Launching workers. 
00:32:20.359 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 39667, failed: 0 00:32:20.359 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2624, failed to submit 37043 00:32:20.359 success 594, unsuccess 2030, failed 0 00:32:20.359 00:47:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:20.359 00:47:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:20.359 00:47:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:20.359 00:47:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:20.359 00:47:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:20.359 00:47:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:20.359 00:47:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:22.263 00:47:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:22.263 00:47:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2222516 00:32:22.263 00:47:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@947 -- # '[' -z 2222516 ']' 00:32:22.263 00:47:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@951 -- # kill -0 2222516 00:32:22.263 00:47:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # uname 00:32:22.263 00:47:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:32:22.263 00:47:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2222516 00:32:22.263 00:47:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:32:22.263 00:47:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:32:22.263 00:47:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2222516' 00:32:22.263 killing process with pid 2222516 00:32:22.263 00:47:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # kill 2222516 00:32:22.263 [2024-05-15 00:47:48.263933] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:32:22.263 00:47:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # wait 2222516 00:32:22.520 00:32:22.520 real 0m15.373s 00:32:22.520 user 1m1.548s 00:32:22.520 sys 0m1.290s 00:32:22.520 00:47:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:32:22.520 00:47:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:22.520 ************************************ 00:32:22.520 END TEST spdk_target_abort 00:32:22.520 ************************************ 00:32:22.779 00:47:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:22.779 00:47:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:32:22.779 00:47:48 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1104 -- # xtrace_disable 00:32:22.779 00:47:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:22.779 ************************************ 00:32:22.779 START TEST kernel_target_abort 00:32:22.779 ************************************ 00:32:22.779 00:47:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1122 -- # kernel_target 00:32:22.779 00:47:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:22.779 00:47:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:32:22.779 00:47:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.780 00:47:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.780 00:47:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.780 00:47:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.780 00:47:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.780 00:47:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.780 00:47:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.780 00:47:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.780 00:47:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.780 00:47:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:22.780 00:47:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:22.780 00:47:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:22.780 00:47:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:22.780 00:47:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:22.780 00:47:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:22.780 00:47:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:32:22.780 00:47:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:22.780 00:47:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:22.780 00:47:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:22.780 00:47:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:32:25.386 Waiting for block devices as requested 00:32:25.386 0000:c9:00.0 (8086 0a54): vfio-pci -> nvme 00:32:25.386 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:32:25.646 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:32:25.646 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:32:25.906 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:32:25.906 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:32:25.906 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:32:26.164 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:32:26.164 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:32:26.422 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:32:26.422 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:32:26.422 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:32:26.679 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:32:26.679 0000:ca:00.0 (8086 0a54): vfio-pci -> nvme 00:32:26.937 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:32:26.937 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:32:27.195 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:32:27.195 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1659 -- # local device=nvme0n1 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:28.591 No valid GPT data, bailing 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme1n1 ]] 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme1n1 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1659 -- # local device=nvme1n1 00:32:28.591 00:47:54 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1661 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # [[ none != none ]] 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme1n1 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:32:28.591 No valid GPT data, bailing 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme1n1 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme1n1 ]] 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme1n1 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda --hostid=80ef6226-405e-ee11-906e-a4bf01973fda -a 10.0.0.1 -t tcp -s 4420 00:32:28.591 00:32:28.591 Discovery Log Number of Records 2, Generation counter 2 00:32:28.591 =====Discovery Log Entry 0====== 00:32:28.591 trtype: tcp 00:32:28.591 adrfam: ipv4 00:32:28.591 subtype: current discovery subsystem 00:32:28.591 treq: not specified, sq flow control disable supported 00:32:28.591 portid: 1 00:32:28.591 trsvcid: 4420 00:32:28.591 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:28.591 traddr: 10.0.0.1 00:32:28.591 eflags: none 00:32:28.591 sectype: none 00:32:28.591 =====Discovery Log Entry 1====== 00:32:28.591 trtype: tcp 00:32:28.591 adrfam: ipv4 00:32:28.591 
subtype: nvme subsystem 00:32:28.591 treq: not specified, sq flow control disable supported 00:32:28.591 portid: 1 00:32:28.591 trsvcid: 4420 00:32:28.591 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:28.591 traddr: 10.0.0.1 00:32:28.591 eflags: none 00:32:28.591 sectype: none 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:28.591 00:47:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:28.591 EAL: No free 2048 kB hugepages reported on node 1 00:32:31.875 Initializing NVMe Controllers 00:32:31.875 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:31.875 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:31.875 
Initialization complete. Launching workers. 00:32:31.875 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 88708, failed: 0 00:32:31.875 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 88708, failed to submit 0 00:32:31.875 success 0, unsuccess 88708, failed 0 00:32:31.875 00:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:31.875 00:47:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:31.875 EAL: No free 2048 kB hugepages reported on node 1 00:32:35.160 Initializing NVMe Controllers 00:32:35.160 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:35.160 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:35.160 Initialization complete. Launching workers. 00:32:35.160 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 138108, failed: 0 00:32:35.160 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34478, failed to submit 103630 00:32:35.160 success 0, unsuccess 34478, failed 0 00:32:35.160 00:48:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:35.160 00:48:00 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:35.160 EAL: No free 2048 kB hugepages reported on node 1 00:32:38.448 Initializing NVMe Controllers 00:32:38.448 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:38.448 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:38.448 Initialization complete. Launching workers. 
00:32:38.448 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 133558, failed: 0 00:32:38.448 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33442, failed to submit 100116 00:32:38.448 success 0, unsuccess 33442, failed 0 00:32:38.448 00:48:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:38.448 00:48:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:38.448 00:48:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:32:38.448 00:48:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:38.448 00:48:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:38.448 00:48:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:38.448 00:48:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:38.448 00:48:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:38.448 00:48:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:38.448 00:48:04 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:32:41.108 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:32:41.108 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:32:41.108 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:32:41.108 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:32:41.108 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:32:41.108 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:32:41.108 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:32:41.108 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:32:41.108 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:32:41.108 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:32:41.108 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:32:41.108 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:32:41.108 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:32:41.108 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:32:41.108 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:32:41.108 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:32:43.010 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:32:43.010 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci 00:32:43.577 00:32:43.577 real 0m20.760s 00:32:43.577 user 0m9.338s 00:32:43.577 sys 0m5.676s 00:32:43.577 00:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # xtrace_disable 00:32:43.577 00:48:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:43.577 ************************************ 00:32:43.577 END TEST kernel_target_abort 00:32:43.577 ************************************ 00:32:43.577 00:48:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:43.577 00:48:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:32:43.577 00:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:43.577 00:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:32:43.577 00:48:09 nvmf_abort_qd_sizes -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:43.577 00:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:32:43.577 00:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:43.577 00:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:43.577 rmmod nvme_tcp 00:32:43.577 rmmod nvme_fabrics 00:32:43.577 rmmod nvme_keyring 00:32:43.577 00:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:43.577 00:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:32:43.577 00:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:32:43.577 00:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2222516 ']' 00:32:43.577 00:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2222516 00:32:43.577 00:48:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@947 -- # '[' -z 2222516 ']' 00:32:43.577 00:48:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@951 -- # kill -0 2222516 00:32:43.577 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 951: kill: (2222516) - No such process 00:32:43.577 00:48:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@974 -- # echo 'Process with pid 2222516 is not found' 00:32:43.577 Process with pid 2222516 is not found 00:32:43.577 00:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:43.577 00:48:09 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:32:46.105 Waiting for block devices as requested 00:32:46.105 0000:c9:00.0 (8086 0a54): vfio-pci -> nvme 00:32:46.105 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:32:46.362 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:32:46.362 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:32:46.619 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:32:46.619 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:32:46.619 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:32:46.876 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:32:46.876 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:32:47.135 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:32:47.135 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:32:47.135 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:32:47.395 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:32:47.395 0000:ca:00.0 (8086 0a54): vfio-pci -> nvme 00:32:47.652 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:32:47.652 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:32:47.912 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:32:47.912 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:32:48.477 00:48:14 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:48.477 00:48:14 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:48.477 00:48:14 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:48.477 00:48:14 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:48.477 00:48:14 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:48.477 00:48:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:48.477 00:48:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.378 00:48:16 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:50.378 00:32:50.378 real 0m55.724s 00:32:50.378 user 1m14.962s 00:32:50.378 sys 0m15.783s 00:32:50.378 00:48:16 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@1123 -- # xtrace_disable 00:32:50.378 00:48:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:50.378 ************************************ 00:32:50.378 END TEST nvmf_abort_qd_sizes 00:32:50.378 ************************************ 00:32:50.378 00:48:16 -- spdk/autotest.sh@291 -- # run_test keyring_file /var/jenkins/workspace/dsa-phy-autotest/spdk/test/keyring/file.sh 00:32:50.378 00:48:16 -- common/autotest_common.sh@1098 -- # '[' 2 -le 1 ']' 00:32:50.378 00:48:16 -- common/autotest_common.sh@1104 -- # xtrace_disable 00:32:50.378 00:48:16 -- common/autotest_common.sh@10 -- # set +x 00:32:50.637 ************************************ 00:32:50.637 START TEST keyring_file 00:32:50.637 ************************************ 00:32:50.637 00:48:16 keyring_file -- common/autotest_common.sh@1122 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/keyring/file.sh 00:32:50.637 * Looking for test storage... 00:32:50.637 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/keyring 00:32:50.637 00:48:16 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/keyring/common.sh 00:32:50.637 00:48:16 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80ef6226-405e-ee11-906e-a4bf01973fda 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80ef6226-405e-ee11-906e-a4bf01973fda 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:32:50.637 00:48:16 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:50.637 00:48:16 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:50.637 00:48:16 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:50.637 00:48:16 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.637 00:48:16 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.637 00:48:16 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.637 00:48:16 keyring_file -- paths/export.sh@5 -- # export PATH 00:32:50.637 00:48:16 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@47 -- # : 0 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:50.637 00:48:16 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:32:50.637 00:48:16 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:32:50.637 00:48:16 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:32:50.637 00:48:16 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:32:50.637 00:48:16 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:32:50.637 00:48:16 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:32:50.637 00:48:16 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:50.637 00:48:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:50.637 00:48:16 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:50.637 00:48:16 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:50.637 00:48:16 keyring_file -- 
keyring/common.sh@17 -- # digest=0 00:32:50.637 00:48:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:50.637 00:48:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.mD3g1c5r6b 00:32:50.637 00:48:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:50.637 00:48:16 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.mD3g1c5r6b 00:32:50.637 00:48:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.mD3g1c5r6b 00:32:50.637 00:48:16 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.mD3g1c5r6b 00:32:50.637 00:48:16 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:32:50.637 00:48:16 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:50.637 00:48:16 keyring_file -- keyring/common.sh@17 -- # name=key1 00:32:50.637 00:48:16 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:32:50.637 00:48:16 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:50.637 00:48:16 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:50.637 00:48:16 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.InnPciqiEb 00:32:50.637 00:48:16 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:50.637 00:48:16 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:50.637 00:48:16 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.InnPciqiEb 00:32:50.637 00:48:16 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.InnPciqiEb 00:32:50.637 00:48:16 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.InnPciqiEb 00:32:50.637 00:48:16 keyring_file -- keyring/file.sh@30 -- # tgtpid=2234466 00:32:50.637 00:48:16 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2234466 00:32:50.637 00:48:16 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 2234466 ']' 00:32:50.637 00:48:16 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:32:50.637 00:48:16 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:50.637 00:48:16 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:32:50.637 00:48:16 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:50.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
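The prep_key traces above reduce to a short shell sequence: make a temporary file, fill it with the NVMeTLSkey-1 interchange string that format_interchange_psk (nvmf/common.sh, an inline python helper) derives from the hex key and digest, and restrict the file to owner-only access. A minimal sketch of that sequence; $interchange_psk and $SPDK_DIR are placeholders introduced here, not variables from the test:

  keyfile=$(mktemp)                               # e.g. /tmp/tmp.mD3g1c5r6b in the trace
  printf '%s\n' "$interchange_psk" > "$keyfile"   # output of format_interchange_psk goes here
  chmod 0600 "$keyfile"                           # keyring_file refuses group/other-readable files
  # Later in the trace the file is registered by name on the bdevperf RPC socket:
  "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$keyfile"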
00:32:50.637 00:48:16 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:32:50.637 00:48:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:50.896 [2024-05-15 00:48:16.811972] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:32:50.896 [2024-05-15 00:48:16.812087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2234466 ] 00:32:50.896 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.896 [2024-05-15 00:48:16.925386] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.896 [2024-05-15 00:48:17.022849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:51.461 00:48:17 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:32:51.461 00:48:17 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:32:51.461 00:48:17 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:32:51.461 00:48:17 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:51.461 00:48:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:51.461 [2024-05-15 00:48:17.498544] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:51.461 null0 00:32:51.461 [2024-05-15 00:48:17.530487] nvmf_rpc.c: 615:decode_rpc_listen_address: *WARNING*: decode_rpc_listen_address: deprecated feature [listen_]address.transport is deprecated in favor of trtype to be removed in v24.09 00:32:51.461 [2024-05-15 00:48:17.530563] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:32:51.461 [2024-05-15 00:48:17.530739] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:32:51.461 [2024-05-15 00:48:17.538537] tcp.c:3665:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:32:51.461 00:48:17 keyring_file -- common/autotest_common.sh@588 -- # [[ 0 == 0 ]] 00:32:51.461 00:48:17 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:51.461 00:48:17 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:32:51.461 00:48:17 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:51.461 00:48:17 keyring_file -- common/autotest_common.sh@637 -- # local arg=rpc_cmd 00:32:51.461 00:48:17 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:51.461 00:48:17 keyring_file -- common/autotest_common.sh@641 -- # type -t rpc_cmd 00:32:51.461 00:48:17 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:51.461 00:48:17 keyring_file -- common/autotest_common.sh@652 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:32:51.461 00:48:17 keyring_file -- common/autotest_common.sh@560 -- # xtrace_disable 00:32:51.461 00:48:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:51.461 [2024-05-15 00:48:17.550525] nvmf_rpc.c: 773:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:32:51.461 request: 00:32:51.461 { 00:32:51.461 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:32:51.461 "secure_channel": false, 00:32:51.461 "listen_address": { 00:32:51.461 "trtype": "tcp", 00:32:51.461 
"traddr": "127.0.0.1", 00:32:51.461 "trsvcid": "4420" 00:32:51.461 }, 00:32:51.461 "method": "nvmf_subsystem_add_listener", 00:32:51.461 "req_id": 1 00:32:51.461 } 00:32:51.461 Got JSON-RPC error response 00:32:51.461 response: 00:32:51.461 { 00:32:51.461 "code": -32602, 00:32:51.461 "message": "Invalid parameters" 00:32:51.461 } 00:32:51.461 00:48:17 keyring_file -- common/autotest_common.sh@588 -- # [[ 1 == 0 ]] 00:32:51.461 00:48:17 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:32:51.461 00:48:17 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:51.461 00:48:17 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:51.461 00:48:17 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:51.461 00:48:17 keyring_file -- keyring/file.sh@46 -- # bperfpid=2234764 00:32:51.461 00:48:17 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2234764 /var/tmp/bperf.sock 00:32:51.461 00:48:17 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 2234764 ']' 00:32:51.461 00:48:17 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:51.461 00:48:17 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:32:51.461 00:48:17 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:51.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:51.461 00:48:17 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:32:51.461 00:48:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:51.461 00:48:17 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:32:51.720 [2024-05-15 00:48:17.624338] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 
00:32:51.720 [2024-05-15 00:48:17.624443] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2234764 ] 00:32:51.720 EAL: No free 2048 kB hugepages reported on node 1 00:32:51.720 [2024-05-15 00:48:17.756373] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.979 [2024-05-15 00:48:17.896256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:52.236 00:48:18 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:32:52.236 00:48:18 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:32:52.236 00:48:18 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mD3g1c5r6b 00:32:52.236 00:48:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mD3g1c5r6b 00:32:52.496 00:48:18 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.InnPciqiEb 00:32:52.496 00:48:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.InnPciqiEb 00:32:52.496 00:48:18 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:32:52.496 00:48:18 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:32:52.496 00:48:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:52.496 00:48:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:52.496 00:48:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:52.754 00:48:18 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.mD3g1c5r6b == \/\t\m\p\/\t\m\p\.\m\D\3\g\1\c\5\r\6\b ]] 00:32:52.754 00:48:18 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:32:52.754 00:48:18 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:32:52.754 00:48:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:52.754 00:48:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:52.754 00:48:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:52.754 00:48:18 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.InnPciqiEb == \/\t\m\p\/\t\m\p\.\I\n\n\P\c\i\q\i\E\b ]] 00:32:52.754 00:48:18 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:32:52.754 00:48:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:52.754 00:48:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:52.754 00:48:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:52.754 00:48:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:52.754 00:48:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:53.012 00:48:19 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:32:53.012 00:48:19 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:32:53.012 00:48:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:53.012 00:48:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:53.012 00:48:19 keyring_file -- keyring/common.sh@10 -- 
# bperf_cmd keyring_get_keys 00:32:53.012 00:48:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:53.012 00:48:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:53.270 00:48:19 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:32:53.270 00:48:19 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:53.270 00:48:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:53.270 [2024-05-15 00:48:19.317473] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:53.270 nvme0n1 00:32:53.270 00:48:19 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:32:53.270 00:48:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:53.270 00:48:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:53.270 00:48:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:53.270 00:48:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:53.270 00:48:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:53.527 00:48:19 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:32:53.527 00:48:19 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:32:53.527 00:48:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:53.527 00:48:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:53.527 00:48:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:53.527 00:48:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:53.527 00:48:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:53.785 00:48:19 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:32:53.785 00:48:19 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:32:53.785 Running I/O for 1 seconds... 
00:32:54.721 00:32:54.721 Latency(us) 00:32:54.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:54.721 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:32:54.721 nvme0n1 : 1.00 18516.23 72.33 0.00 0.00 6896.53 3725.20 16211.54 00:32:54.721 =================================================================================================================== 00:32:54.722 Total : 18516.23 72.33 0.00 0.00 6896.53 3725.20 16211.54 00:32:54.722 0 00:32:54.722 00:48:20 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:54.722 00:48:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:54.980 00:48:20 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:32:54.980 00:48:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:54.980 00:48:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:54.980 00:48:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:54.980 00:48:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:54.980 00:48:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:54.980 00:48:21 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:32:54.980 00:48:21 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:32:54.980 00:48:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:54.980 00:48:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:54.980 00:48:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:54.980 00:48:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:54.980 00:48:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:55.237 00:48:21 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:32:55.237 00:48:21 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:55.237 00:48:21 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:32:55.237 00:48:21 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:55.237 00:48:21 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:32:55.237 00:48:21 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:55.237 00:48:21 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:32:55.237 00:48:21 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:55.237 00:48:21 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:32:55.237 00:48:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk key1 00:32:55.237 [2024-05-15 00:48:21.323638] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:32:55.237 [2024-05-15 00:48:21.323981] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a6180 (107): Transport endpoint is not connected 00:32:55.237 [2024-05-15 00:48:21.324960] nvme_tcp.c:2176:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6150003a6180 (9): Bad file descriptor 00:32:55.237 [2024-05-15 00:48:21.325956] nvme_ctrlr.c:4041:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:55.237 [2024-05-15 00:48:21.325972] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:32:55.237 [2024-05-15 00:48:21.325982] nvme_ctrlr.c:1042:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:55.237 request: 00:32:55.237 { 00:32:55.237 "name": "nvme0", 00:32:55.237 "trtype": "tcp", 00:32:55.237 "traddr": "127.0.0.1", 00:32:55.237 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:55.237 "adrfam": "ipv4", 00:32:55.238 "trsvcid": "4420", 00:32:55.238 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:55.238 "psk": "key1", 00:32:55.238 "method": "bdev_nvme_attach_controller", 00:32:55.238 "req_id": 1 00:32:55.238 } 00:32:55.238 Got JSON-RPC error response 00:32:55.238 response: 00:32:55.238 { 00:32:55.238 "code": -32602, 00:32:55.238 "message": "Invalid parameters" 00:32:55.238 } 00:32:55.238 00:48:21 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:32:55.238 00:48:21 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:55.238 00:48:21 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:55.238 00:48:21 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:55.238 00:48:21 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:32:55.238 00:48:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:55.238 00:48:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:55.238 00:48:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:55.238 00:48:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:55.238 00:48:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:55.495 00:48:21 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:32:55.495 00:48:21 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:32:55.495 00:48:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:55.495 00:48:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:32:55.495 00:48:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:55.495 00:48:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:55.495 00:48:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:32:55.495 00:48:21 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:32:55.495 00:48:21 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:32:55.495 00:48:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 
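The attach attempt above is the negative case: the earlier successful attach used key0 on both sides, so presenting key1 cannot complete the TLS handshake and the RPC returns -32602, which the test expects. Spelled out as a sketch (paths and arguments copied from the trace; the if/echo wrapper stands in for the test's NOT helper):

  rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
  if "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk key1; then
    echo "unexpected: attach with the wrong PSK succeeded" >&2
  fi
  # Refcounts stay at 1 (the file registration only), then both keys are dropped:
  "$rpc" -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0") | .refcnt'
  "$rpc" -s /var/tmp/bperf.sock keyring_file_remove_key key0
  "$rpc" -s /var/tmp/bperf.sock keyring_file_remove_key key1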
00:32:55.753 00:48:21 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:32:55.753 00:48:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:32:55.753 00:48:21 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:32:55.753 00:48:21 keyring_file -- keyring/file.sh@77 -- # jq length 00:32:55.753 00:48:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:56.012 00:48:22 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:32:56.012 00:48:22 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.mD3g1c5r6b 00:32:56.012 00:48:22 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.mD3g1c5r6b 00:32:56.012 00:48:22 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:32:56.012 00:48:22 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.mD3g1c5r6b 00:32:56.012 00:48:22 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:32:56.012 00:48:22 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:56.012 00:48:22 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:32:56.012 00:48:22 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:56.012 00:48:22 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mD3g1c5r6b 00:32:56.012 00:48:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mD3g1c5r6b 00:32:56.012 [2024-05-15 00:48:22.149037] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.mD3g1c5r6b': 0100660 00:32:56.012 [2024-05-15 00:48:22.149073] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:32:56.012 request: 00:32:56.012 { 00:32:56.012 "name": "key0", 00:32:56.012 "path": "/tmp/tmp.mD3g1c5r6b", 00:32:56.012 "method": "keyring_file_add_key", 00:32:56.012 "req_id": 1 00:32:56.012 } 00:32:56.012 Got JSON-RPC error response 00:32:56.012 response: 00:32:56.012 { 00:32:56.012 "code": -1, 00:32:56.012 "message": "Operation not permitted" 00:32:56.012 } 00:32:56.012 00:48:22 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:32:56.012 00:48:22 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:56.012 00:48:22 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:56.012 00:48:22 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:56.012 00:48:22 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.mD3g1c5r6b 00:32:56.012 00:48:22 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.mD3g1c5r6b 00:32:56.012 00:48:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.mD3g1c5r6b 00:32:56.272 00:48:22 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.mD3g1c5r6b 00:32:56.272 00:48:22 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:32:56.272 00:48:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:56.272 00:48:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:56.272 
00:48:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:56.272 00:48:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:56.272 00:48:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:56.532 00:48:22 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:32:56.532 00:48:22 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:56.532 00:48:22 keyring_file -- common/autotest_common.sh@649 -- # local es=0 00:32:56.532 00:48:22 keyring_file -- common/autotest_common.sh@651 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:56.532 00:48:22 keyring_file -- common/autotest_common.sh@637 -- # local arg=bperf_cmd 00:32:56.532 00:48:22 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:56.532 00:48:22 keyring_file -- common/autotest_common.sh@641 -- # type -t bperf_cmd 00:32:56.532 00:48:22 keyring_file -- common/autotest_common.sh@641 -- # case "$(type -t "$arg")" in 00:32:56.532 00:48:22 keyring_file -- common/autotest_common.sh@652 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:56.532 00:48:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:56.532 [2024-05-15 00:48:22.585170] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.mD3g1c5r6b': No such file or directory 00:32:56.532 [2024-05-15 00:48:22.585200] nvme_tcp.c:2573:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:32:56.532 [2024-05-15 00:48:22.585224] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:32:56.532 [2024-05-15 00:48:22.585232] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:32:56.532 [2024-05-15 00:48:22.585241] bdev_nvme.c:6252:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:32:56.532 request: 00:32:56.532 { 00:32:56.532 "name": "nvme0", 00:32:56.532 "trtype": "tcp", 00:32:56.532 "traddr": "127.0.0.1", 00:32:56.532 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:56.532 "adrfam": "ipv4", 00:32:56.532 "trsvcid": "4420", 00:32:56.532 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:56.532 "psk": "key0", 00:32:56.532 "method": "bdev_nvme_attach_controller", 00:32:56.532 "req_id": 1 00:32:56.532 } 00:32:56.532 Got JSON-RPC error response 00:32:56.532 response: 00:32:56.532 { 00:32:56.532 "code": -19, 00:32:56.532 "message": "No such device" 00:32:56.532 } 00:32:56.532 00:48:22 keyring_file -- common/autotest_common.sh@652 -- # es=1 00:32:56.533 00:48:22 keyring_file -- common/autotest_common.sh@660 -- # (( es > 128 )) 00:32:56.533 00:48:22 keyring_file -- common/autotest_common.sh@671 -- # [[ -n '' ]] 00:32:56.533 00:48:22 keyring_file -- common/autotest_common.sh@676 -- # (( !es == 0 )) 00:32:56.533 00:48:22 keyring_file -- keyring/file.sh@92 
-- # bperf_cmd keyring_file_remove_key key0 00:32:56.533 00:48:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:56.792 00:48:22 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:32:56.792 00:48:22 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:32:56.792 00:48:22 keyring_file -- keyring/common.sh@17 -- # name=key0 00:32:56.792 00:48:22 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:32:56.792 00:48:22 keyring_file -- keyring/common.sh@17 -- # digest=0 00:32:56.792 00:48:22 keyring_file -- keyring/common.sh@18 -- # mktemp 00:32:56.792 00:48:22 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.SepCyTwpy0 00:32:56.792 00:48:22 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:32:56.792 00:48:22 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:32:56.792 00:48:22 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:32:56.792 00:48:22 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:32:56.792 00:48:22 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:32:56.792 00:48:22 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:32:56.792 00:48:22 keyring_file -- nvmf/common.sh@705 -- # python - 00:32:56.793 00:48:22 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.SepCyTwpy0 00:32:56.793 00:48:22 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.SepCyTwpy0 00:32:56.793 00:48:22 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.SepCyTwpy0 00:32:56.793 00:48:22 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SepCyTwpy0 00:32:56.793 00:48:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SepCyTwpy0 00:32:57.052 00:48:23 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:57.052 00:48:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:57.309 nvme0n1 00:32:57.309 00:48:23 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:32:57.309 00:48:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:57.309 00:48:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:57.309 00:48:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:57.309 00:48:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:57.309 00:48:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:57.309 00:48:23 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:32:57.309 00:48:23 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:32:57.309 00:48:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:32:57.565 00:48:23 
keyring_file -- keyring/file.sh@101 -- # get_key key0 00:32:57.565 00:48:23 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:32:57.565 00:48:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:57.565 00:48:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:57.565 00:48:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:57.565 00:48:23 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:32:57.565 00:48:23 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:32:57.565 00:48:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:32:57.565 00:48:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:32:57.565 00:48:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:32:57.565 00:48:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:32:57.565 00:48:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:57.821 00:48:23 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:32:57.821 00:48:23 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:32:57.821 00:48:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:32:57.821 00:48:23 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:32:57.821 00:48:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:57.821 00:48:23 keyring_file -- keyring/file.sh@104 -- # jq length 00:32:58.078 00:48:24 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:32:58.078 00:48:24 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.SepCyTwpy0 00:32:58.078 00:48:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.SepCyTwpy0 00:32:58.078 00:48:24 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.InnPciqiEb 00:32:58.078 00:48:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.InnPciqiEb 00:32:58.337 00:48:24 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:58.337 00:48:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:32:58.597 nvme0n1 00:32:58.597 00:48:24 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:32:58.597 00:48:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:32:58.597 00:48:24 keyring_file -- keyring/file.sh@112 -- # config='{ 00:32:58.597 "subsystems": [ 00:32:58.597 { 00:32:58.597 "subsystem": "keyring", 00:32:58.597 "config": [ 00:32:58.597 { 00:32:58.597 "method": "keyring_file_add_key", 
00:32:58.597 "params": { 00:32:58.597 "name": "key0", 00:32:58.597 "path": "/tmp/tmp.SepCyTwpy0" 00:32:58.597 } 00:32:58.597 }, 00:32:58.597 { 00:32:58.597 "method": "keyring_file_add_key", 00:32:58.597 "params": { 00:32:58.597 "name": "key1", 00:32:58.597 "path": "/tmp/tmp.InnPciqiEb" 00:32:58.597 } 00:32:58.597 } 00:32:58.597 ] 00:32:58.597 }, 00:32:58.597 { 00:32:58.597 "subsystem": "iobuf", 00:32:58.597 "config": [ 00:32:58.597 { 00:32:58.597 "method": "iobuf_set_options", 00:32:58.597 "params": { 00:32:58.597 "small_pool_count": 8192, 00:32:58.597 "large_pool_count": 1024, 00:32:58.597 "small_bufsize": 8192, 00:32:58.597 "large_bufsize": 135168 00:32:58.597 } 00:32:58.597 } 00:32:58.597 ] 00:32:58.597 }, 00:32:58.597 { 00:32:58.597 "subsystem": "sock", 00:32:58.597 "config": [ 00:32:58.597 { 00:32:58.597 "method": "sock_impl_set_options", 00:32:58.597 "params": { 00:32:58.597 "impl_name": "posix", 00:32:58.597 "recv_buf_size": 2097152, 00:32:58.597 "send_buf_size": 2097152, 00:32:58.597 "enable_recv_pipe": true, 00:32:58.597 "enable_quickack": false, 00:32:58.597 "enable_placement_id": 0, 00:32:58.597 "enable_zerocopy_send_server": true, 00:32:58.597 "enable_zerocopy_send_client": false, 00:32:58.597 "zerocopy_threshold": 0, 00:32:58.597 "tls_version": 0, 00:32:58.597 "enable_ktls": false 00:32:58.597 } 00:32:58.597 }, 00:32:58.597 { 00:32:58.597 "method": "sock_impl_set_options", 00:32:58.597 "params": { 00:32:58.597 "impl_name": "ssl", 00:32:58.597 "recv_buf_size": 4096, 00:32:58.597 "send_buf_size": 4096, 00:32:58.597 "enable_recv_pipe": true, 00:32:58.597 "enable_quickack": false, 00:32:58.597 "enable_placement_id": 0, 00:32:58.597 "enable_zerocopy_send_server": true, 00:32:58.597 "enable_zerocopy_send_client": false, 00:32:58.597 "zerocopy_threshold": 0, 00:32:58.597 "tls_version": 0, 00:32:58.597 "enable_ktls": false 00:32:58.597 } 00:32:58.597 } 00:32:58.597 ] 00:32:58.597 }, 00:32:58.597 { 00:32:58.597 "subsystem": "vmd", 00:32:58.597 "config": [] 00:32:58.597 }, 00:32:58.597 { 00:32:58.597 "subsystem": "accel", 00:32:58.597 "config": [ 00:32:58.597 { 00:32:58.597 "method": "accel_set_options", 00:32:58.597 "params": { 00:32:58.597 "small_cache_size": 128, 00:32:58.597 "large_cache_size": 16, 00:32:58.597 "task_count": 2048, 00:32:58.597 "sequence_count": 2048, 00:32:58.597 "buf_count": 2048 00:32:58.597 } 00:32:58.597 } 00:32:58.597 ] 00:32:58.597 }, 00:32:58.597 { 00:32:58.597 "subsystem": "bdev", 00:32:58.597 "config": [ 00:32:58.597 { 00:32:58.597 "method": "bdev_set_options", 00:32:58.597 "params": { 00:32:58.597 "bdev_io_pool_size": 65535, 00:32:58.597 "bdev_io_cache_size": 256, 00:32:58.597 "bdev_auto_examine": true, 00:32:58.597 "iobuf_small_cache_size": 128, 00:32:58.597 "iobuf_large_cache_size": 16 00:32:58.597 } 00:32:58.597 }, 00:32:58.597 { 00:32:58.597 "method": "bdev_raid_set_options", 00:32:58.597 "params": { 00:32:58.597 "process_window_size_kb": 1024 00:32:58.597 } 00:32:58.597 }, 00:32:58.597 { 00:32:58.597 "method": "bdev_iscsi_set_options", 00:32:58.597 "params": { 00:32:58.597 "timeout_sec": 30 00:32:58.597 } 00:32:58.597 }, 00:32:58.597 { 00:32:58.597 "method": "bdev_nvme_set_options", 00:32:58.597 "params": { 00:32:58.597 "action_on_timeout": "none", 00:32:58.597 "timeout_us": 0, 00:32:58.597 "timeout_admin_us": 0, 00:32:58.597 "keep_alive_timeout_ms": 10000, 00:32:58.597 "arbitration_burst": 0, 00:32:58.597 "low_priority_weight": 0, 00:32:58.597 "medium_priority_weight": 0, 00:32:58.597 "high_priority_weight": 0, 00:32:58.597 
"nvme_adminq_poll_period_us": 10000, 00:32:58.597 "nvme_ioq_poll_period_us": 0, 00:32:58.597 "io_queue_requests": 512, 00:32:58.597 "delay_cmd_submit": true, 00:32:58.597 "transport_retry_count": 4, 00:32:58.597 "bdev_retry_count": 3, 00:32:58.597 "transport_ack_timeout": 0, 00:32:58.597 "ctrlr_loss_timeout_sec": 0, 00:32:58.597 "reconnect_delay_sec": 0, 00:32:58.597 "fast_io_fail_timeout_sec": 0, 00:32:58.597 "disable_auto_failback": false, 00:32:58.597 "generate_uuids": false, 00:32:58.597 "transport_tos": 0, 00:32:58.597 "nvme_error_stat": false, 00:32:58.597 "rdma_srq_size": 0, 00:32:58.597 "io_path_stat": false, 00:32:58.597 "allow_accel_sequence": false, 00:32:58.597 "rdma_max_cq_size": 0, 00:32:58.597 "rdma_cm_event_timeout_ms": 0, 00:32:58.597 "dhchap_digests": [ 00:32:58.597 "sha256", 00:32:58.597 "sha384", 00:32:58.597 "sha512" 00:32:58.597 ], 00:32:58.597 "dhchap_dhgroups": [ 00:32:58.597 "null", 00:32:58.597 "ffdhe2048", 00:32:58.597 "ffdhe3072", 00:32:58.597 "ffdhe4096", 00:32:58.597 "ffdhe6144", 00:32:58.597 "ffdhe8192" 00:32:58.597 ] 00:32:58.597 } 00:32:58.597 }, 00:32:58.597 { 00:32:58.597 "method": "bdev_nvme_attach_controller", 00:32:58.597 "params": { 00:32:58.597 "name": "nvme0", 00:32:58.597 "trtype": "TCP", 00:32:58.597 "adrfam": "IPv4", 00:32:58.597 "traddr": "127.0.0.1", 00:32:58.597 "trsvcid": "4420", 00:32:58.597 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:58.597 "prchk_reftag": false, 00:32:58.597 "prchk_guard": false, 00:32:58.597 "ctrlr_loss_timeout_sec": 0, 00:32:58.597 "reconnect_delay_sec": 0, 00:32:58.597 "fast_io_fail_timeout_sec": 0, 00:32:58.597 "psk": "key0", 00:32:58.597 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:58.597 "hdgst": false, 00:32:58.597 "ddgst": false 00:32:58.597 } 00:32:58.597 }, 00:32:58.597 { 00:32:58.597 "method": "bdev_nvme_set_hotplug", 00:32:58.597 "params": { 00:32:58.597 "period_us": 100000, 00:32:58.597 "enable": false 00:32:58.597 } 00:32:58.597 }, 00:32:58.597 { 00:32:58.597 "method": "bdev_wait_for_examine" 00:32:58.597 } 00:32:58.597 ] 00:32:58.597 }, 00:32:58.597 { 00:32:58.597 "subsystem": "nbd", 00:32:58.597 "config": [] 00:32:58.597 } 00:32:58.597 ] 00:32:58.597 }' 00:32:58.597 00:48:24 keyring_file -- keyring/file.sh@114 -- # killprocess 2234764 00:32:58.597 00:48:24 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 2234764 ']' 00:32:58.597 00:48:24 keyring_file -- common/autotest_common.sh@951 -- # kill -0 2234764 00:32:58.858 00:48:24 keyring_file -- common/autotest_common.sh@952 -- # uname 00:32:58.858 00:48:24 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:32:58.858 00:48:24 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2234764 00:32:58.858 00:48:24 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:32:58.858 00:48:24 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:32:58.858 00:48:24 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2234764' 00:32:58.858 killing process with pid 2234764 00:32:58.858 00:48:24 keyring_file -- common/autotest_common.sh@966 -- # kill 2234764 00:32:58.858 Received shutdown signal, test time was about 1.000000 seconds 00:32:58.858 00:32:58.858 Latency(us) 00:32:58.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:58.858 =================================================================================================================== 00:32:58.858 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:32:58.858 00:48:24 keyring_file -- common/autotest_common.sh@971 -- # wait 2234764 00:32:59.118 00:48:25 keyring_file -- keyring/file.sh@117 -- # bperfpid=2236382 00:32:59.118 00:48:25 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2236382 /var/tmp/bperf.sock 00:32:59.118 00:48:25 keyring_file -- common/autotest_common.sh@828 -- # '[' -z 2236382 ']' 00:32:59.118 00:48:25 keyring_file -- common/autotest_common.sh@832 -- # local rpc_addr=/var/tmp/bperf.sock 00:32:59.118 00:48:25 keyring_file -- common/autotest_common.sh@833 -- # local max_retries=100 00:32:59.118 00:48:25 keyring_file -- common/autotest_common.sh@835 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:32:59.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:32:59.118 00:48:25 keyring_file -- common/autotest_common.sh@837 -- # xtrace_disable 00:32:59.118 00:48:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:32:59.118 00:48:25 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:32:59.118 00:48:25 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:32:59.118 "subsystems": [ 00:32:59.118 { 00:32:59.118 "subsystem": "keyring", 00:32:59.118 "config": [ 00:32:59.118 { 00:32:59.118 "method": "keyring_file_add_key", 00:32:59.118 "params": { 00:32:59.118 "name": "key0", 00:32:59.118 "path": "/tmp/tmp.SepCyTwpy0" 00:32:59.118 } 00:32:59.118 }, 00:32:59.118 { 00:32:59.118 "method": "keyring_file_add_key", 00:32:59.118 "params": { 00:32:59.118 "name": "key1", 00:32:59.118 "path": "/tmp/tmp.InnPciqiEb" 00:32:59.118 } 00:32:59.118 } 00:32:59.118 ] 00:32:59.118 }, 00:32:59.118 { 00:32:59.118 "subsystem": "iobuf", 00:32:59.118 "config": [ 00:32:59.118 { 00:32:59.118 "method": "iobuf_set_options", 00:32:59.118 "params": { 00:32:59.118 "small_pool_count": 8192, 00:32:59.118 "large_pool_count": 1024, 00:32:59.118 "small_bufsize": 8192, 00:32:59.118 "large_bufsize": 135168 00:32:59.118 } 00:32:59.118 } 00:32:59.118 ] 00:32:59.118 }, 00:32:59.118 { 00:32:59.118 "subsystem": "sock", 00:32:59.118 "config": [ 00:32:59.118 { 00:32:59.118 "method": "sock_impl_set_options", 00:32:59.118 "params": { 00:32:59.118 "impl_name": "posix", 00:32:59.118 "recv_buf_size": 2097152, 00:32:59.118 "send_buf_size": 2097152, 00:32:59.118 "enable_recv_pipe": true, 00:32:59.118 "enable_quickack": false, 00:32:59.118 "enable_placement_id": 0, 00:32:59.118 "enable_zerocopy_send_server": true, 00:32:59.118 "enable_zerocopy_send_client": false, 00:32:59.118 "zerocopy_threshold": 0, 00:32:59.118 "tls_version": 0, 00:32:59.118 "enable_ktls": false 00:32:59.118 } 00:32:59.118 }, 00:32:59.118 { 00:32:59.118 "method": "sock_impl_set_options", 00:32:59.118 "params": { 00:32:59.118 "impl_name": "ssl", 00:32:59.118 "recv_buf_size": 4096, 00:32:59.118 "send_buf_size": 4096, 00:32:59.118 "enable_recv_pipe": true, 00:32:59.118 "enable_quickack": false, 00:32:59.118 "enable_placement_id": 0, 00:32:59.118 "enable_zerocopy_send_server": true, 00:32:59.118 "enable_zerocopy_send_client": false, 00:32:59.118 "zerocopy_threshold": 0, 00:32:59.118 "tls_version": 0, 00:32:59.118 "enable_ktls": false 00:32:59.118 } 00:32:59.118 } 00:32:59.118 ] 00:32:59.118 }, 00:32:59.118 { 00:32:59.118 "subsystem": "vmd", 00:32:59.118 "config": [] 00:32:59.118 }, 00:32:59.118 { 00:32:59.118 "subsystem": "accel", 00:32:59.118 "config": [ 00:32:59.118 { 
00:32:59.118 "method": "accel_set_options", 00:32:59.118 "params": { 00:32:59.118 "small_cache_size": 128, 00:32:59.118 "large_cache_size": 16, 00:32:59.118 "task_count": 2048, 00:32:59.119 "sequence_count": 2048, 00:32:59.119 "buf_count": 2048 00:32:59.119 } 00:32:59.119 } 00:32:59.119 ] 00:32:59.119 }, 00:32:59.119 { 00:32:59.119 "subsystem": "bdev", 00:32:59.119 "config": [ 00:32:59.119 { 00:32:59.119 "method": "bdev_set_options", 00:32:59.119 "params": { 00:32:59.119 "bdev_io_pool_size": 65535, 00:32:59.119 "bdev_io_cache_size": 256, 00:32:59.119 "bdev_auto_examine": true, 00:32:59.119 "iobuf_small_cache_size": 128, 00:32:59.119 "iobuf_large_cache_size": 16 00:32:59.119 } 00:32:59.119 }, 00:32:59.119 { 00:32:59.119 "method": "bdev_raid_set_options", 00:32:59.119 "params": { 00:32:59.119 "process_window_size_kb": 1024 00:32:59.119 } 00:32:59.119 }, 00:32:59.119 { 00:32:59.119 "method": "bdev_iscsi_set_options", 00:32:59.119 "params": { 00:32:59.119 "timeout_sec": 30 00:32:59.119 } 00:32:59.119 }, 00:32:59.119 { 00:32:59.119 "method": "bdev_nvme_set_options", 00:32:59.119 "params": { 00:32:59.119 "action_on_timeout": "none", 00:32:59.119 "timeout_us": 0, 00:32:59.119 "timeout_admin_us": 0, 00:32:59.119 "keep_alive_timeout_ms": 10000, 00:32:59.119 "arbitration_burst": 0, 00:32:59.119 "low_priority_weight": 0, 00:32:59.119 "medium_priority_weight": 0, 00:32:59.119 "high_priority_weight": 0, 00:32:59.119 "nvme_adminq_poll_period_us": 10000, 00:32:59.119 "nvme_ioq_poll_period_us": 0, 00:32:59.119 "io_queue_requests": 512, 00:32:59.119 "delay_cmd_submit": true, 00:32:59.119 "transport_retry_count": 4, 00:32:59.119 "bdev_retry_count": 3, 00:32:59.119 "transport_ack_timeout": 0, 00:32:59.119 "ctrlr_loss_timeout_sec": 0, 00:32:59.119 "reconnect_delay_sec": 0, 00:32:59.119 "fast_io_fail_timeout_sec": 0, 00:32:59.119 "disable_auto_failback": false, 00:32:59.119 "generate_uuids": false, 00:32:59.119 "transport_tos": 0, 00:32:59.119 "nvme_error_stat": false, 00:32:59.119 "rdma_srq_size": 0, 00:32:59.119 "io_path_stat": false, 00:32:59.119 "allow_accel_sequence": false, 00:32:59.119 "rdma_max_cq_size": 0, 00:32:59.119 "rdma_cm_event_timeout_ms": 0, 00:32:59.119 "dhchap_digests": [ 00:32:59.119 "sha256", 00:32:59.119 "sha384", 00:32:59.119 "sha512" 00:32:59.119 ], 00:32:59.119 "dhchap_dhgroups": [ 00:32:59.119 "null", 00:32:59.119 "ffdhe2048", 00:32:59.119 "ffdhe3072", 00:32:59.119 "ffdhe4096", 00:32:59.119 "ffdhe6144", 00:32:59.119 "ffdhe8192" 00:32:59.119 ] 00:32:59.119 } 00:32:59.119 }, 00:32:59.119 { 00:32:59.119 "method": "bdev_nvme_attach_controller", 00:32:59.119 "params": { 00:32:59.119 "name": "nvme0", 00:32:59.119 "trtype": "TCP", 00:32:59.119 "adrfam": "IPv4", 00:32:59.119 "traddr": "127.0.0.1", 00:32:59.119 "trsvcid": "4420", 00:32:59.119 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:59.119 "prchk_reftag": false, 00:32:59.119 "prchk_guard": false, 00:32:59.119 "ctrlr_loss_timeout_sec": 0, 00:32:59.119 "reconnect_delay_sec": 0, 00:32:59.119 "fast_io_fail_timeout_sec": 0, 00:32:59.119 "psk": "key0", 00:32:59.119 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:59.119 "hdgst": false, 00:32:59.119 "ddgst": false 00:32:59.119 } 00:32:59.119 }, 00:32:59.119 { 00:32:59.119 "method": "bdev_nvme_set_hotplug", 00:32:59.119 "params": { 00:32:59.119 "period_us": 100000, 00:32:59.119 "enable": false 00:32:59.119 } 00:32:59.119 }, 00:32:59.119 { 00:32:59.119 "method": "bdev_wait_for_examine" 00:32:59.119 } 00:32:59.119 ] 00:32:59.119 }, 00:32:59.119 { 00:32:59.119 "subsystem": "nbd", 00:32:59.119 "config": 
[] 00:32:59.119 } 00:32:59.119 ] 00:32:59.119 }' 00:32:59.119 [2024-05-15 00:48:25.246053] Starting SPDK v24.05-pre git sha1 68960dff2 / DPDK 23.11.0 initialization... 00:32:59.119 [2024-05-15 00:48:25.246178] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2236382 ] 00:32:59.377 EAL: No free 2048 kB hugepages reported on node 1 00:32:59.377 [2024-05-15 00:48:25.361489] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:59.377 [2024-05-15 00:48:25.455497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:59.635 [2024-05-15 00:48:25.669822] bdev_nvme_rpc.c: 518:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:32:59.892 00:48:25 keyring_file -- common/autotest_common.sh@857 -- # (( i == 0 )) 00:32:59.892 00:48:25 keyring_file -- common/autotest_common.sh@861 -- # return 0 00:32:59.892 00:48:25 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:32:59.892 00:48:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:32:59.892 00:48:25 keyring_file -- keyring/file.sh@120 -- # jq length 00:33:00.150 00:48:26 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:33:00.150 00:48:26 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:33:00.150 00:48:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:00.150 00:48:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:00.150 00:48:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:00.150 00:48:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:00.150 00:48:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:00.150 00:48:26 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:00.150 00:48:26 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:33:00.150 00:48:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:00.150 00:48:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:00.150 00:48:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:00.150 00:48:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:00.150 00:48:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:00.408 00:48:26 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:33:00.408 00:48:26 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:33:00.408 00:48:26 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:33:00.408 00:48:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:00.408 00:48:26 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:33:00.408 00:48:26 keyring_file -- keyring/file.sh@1 -- # cleanup 00:33:00.408 00:48:26 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.SepCyTwpy0 /tmp/tmp.InnPciqiEb 00:33:00.408 00:48:26 keyring_file -- keyring/file.sh@20 -- # killprocess 2236382 00:33:00.408 00:48:26 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 2236382 ']' 00:33:00.408 
00:48:26 keyring_file -- common/autotest_common.sh@951 -- # kill -0 2236382 00:33:00.408 00:48:26 keyring_file -- common/autotest_common.sh@952 -- # uname 00:33:00.408 00:48:26 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:33:00.408 00:48:26 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2236382 00:33:00.408 00:48:26 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_1 00:33:00.408 00:48:26 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_1 = sudo ']' 00:33:00.408 00:48:26 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2236382' 00:33:00.408 killing process with pid 2236382 00:33:00.408 00:48:26 keyring_file -- common/autotest_common.sh@966 -- # kill 2236382 00:33:00.408 00:48:26 keyring_file -- common/autotest_common.sh@971 -- # wait 2236382 00:33:00.408 Received shutdown signal, test time was about 1.000000 seconds 00:33:00.408 00:33:00.408 Latency(us) 00:33:00.408 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:00.408 =================================================================================================================== 00:33:00.408 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:00.978 00:48:26 keyring_file -- keyring/file.sh@21 -- # killprocess 2234466 00:33:00.978 00:48:26 keyring_file -- common/autotest_common.sh@947 -- # '[' -z 2234466 ']' 00:33:00.978 00:48:26 keyring_file -- common/autotest_common.sh@951 -- # kill -0 2234466 00:33:00.978 00:48:26 keyring_file -- common/autotest_common.sh@952 -- # uname 00:33:00.978 00:48:26 keyring_file -- common/autotest_common.sh@952 -- # '[' Linux = Linux ']' 00:33:00.978 00:48:26 keyring_file -- common/autotest_common.sh@953 -- # ps --no-headers -o comm= 2234466 00:33:00.978 00:48:26 keyring_file -- common/autotest_common.sh@953 -- # process_name=reactor_0 00:33:00.978 00:48:26 keyring_file -- common/autotest_common.sh@957 -- # '[' reactor_0 = sudo ']' 00:33:00.978 00:48:26 keyring_file -- common/autotest_common.sh@965 -- # echo 'killing process with pid 2234466' 00:33:00.978 killing process with pid 2234466 00:33:00.978 00:48:26 keyring_file -- common/autotest_common.sh@966 -- # kill 2234466 00:33:00.978 00:48:26 keyring_file -- common/autotest_common.sh@971 -- # wait 2234466 00:33:00.978 [2024-05-15 00:48:26.965407] app.c:1024:log_deprecation_hits: *WARNING*: decode_rpc_listen_address: deprecation '[listen_]address.transport is deprecated in favor of trtype' scheduled for removal in v24.09 hit 1 times 00:33:00.978 [2024-05-15 00:48:26.965459] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:33:01.956 00:33:01.956 real 0m11.234s 00:33:01.956 user 0m25.197s 00:33:01.956 sys 0m2.556s 00:33:01.956 00:48:27 keyring_file -- common/autotest_common.sh@1123 -- # xtrace_disable 00:33:01.956 00:48:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:01.956 ************************************ 00:33:01.956 END TEST keyring_file 00:33:01.956 ************************************ 00:33:01.956 00:48:27 -- spdk/autotest.sh@292 -- # [[ n == y ]] 00:33:01.956 00:48:27 -- spdk/autotest.sh@304 -- # '[' 0 -eq 1 ']' 00:33:01.956 00:48:27 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:33:01.956 00:48:27 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:33:01.956 00:48:27 -- spdk/autotest.sh@317 -- # '[' 0 -eq 1 ']' 00:33:01.956 00:48:27 -- spdk/autotest.sh@326 -- # '[' 0 -eq 1 ']' 
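Before these final checks, the second bdevperf instance was started from the JSON blob produced by save_config: the configuration is echoed to the process through a file descriptor, which is why the command line earlier in the log shows -c /dev/fd/63. A sketch of that launch pattern; the polling loop at the end is a simplified stand-in for the harness's waitforlisten helper, and the kill of the previous instance is only noted in a comment:

  #!/usr/bin/env bash
  spdk=/var/jenkins/workspace/dsa-phy-autotest/spdk
  rpc_sock=/var/tmp/bperf.sock

  # Capture the running app's configuration before tearing it down.
  config=$("$spdk/scripts/rpc.py" -s "$rpc_sock" save_config)
  # (the test kills the previous bdevperf here so the socket can be reused)

  # Restart bdevperf with that config handed over via process substitution;
  # the shell exposes it as /dev/fd/NN, matching the -c /dev/fd/63 above.
  "$spdk/build/examples/bdevperf" -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
    -r "$rpc_sock" -z -c <(echo "$config") &
  bperfpid=$!

  # Wait until the new process answers RPCs on the socket before querying keys.
  until "$spdk/scripts/rpc.py" -s "$rpc_sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2
  done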
00:33:01.956 00:48:27 -- spdk/autotest.sh@331 -- # '[' 0 -eq 1 ']' 00:33:01.956 00:48:27 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:33:01.956 00:48:27 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:33:01.956 00:48:27 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:33:01.956 00:48:27 -- spdk/autotest.sh@348 -- # '[' 0 -eq 1 ']' 00:33:01.956 00:48:27 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:33:01.956 00:48:27 -- spdk/autotest.sh@359 -- # [[ 0 -eq 1 ]] 00:33:01.956 00:48:27 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:33:01.956 00:48:27 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:33:01.956 00:48:27 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:33:01.956 00:48:27 -- spdk/autotest.sh@376 -- # trap - SIGINT SIGTERM EXIT 00:33:01.956 00:48:27 -- spdk/autotest.sh@378 -- # timing_enter post_cleanup 00:33:01.956 00:48:27 -- common/autotest_common.sh@721 -- # xtrace_disable 00:33:01.956 00:48:27 -- common/autotest_common.sh@10 -- # set +x 00:33:01.956 00:48:27 -- spdk/autotest.sh@379 -- # autotest_cleanup 00:33:01.956 00:48:27 -- common/autotest_common.sh@1389 -- # local autotest_es=0 00:33:01.956 00:48:27 -- common/autotest_common.sh@1390 -- # xtrace_disable 00:33:01.956 00:48:27 -- common/autotest_common.sh@10 -- # set +x 00:33:07.222 INFO: APP EXITING 00:33:07.222 INFO: killing all VMs 00:33:07.222 INFO: killing vhost app 00:33:07.222 INFO: EXIT DONE 00:33:09.756 0000:c9:00.0 (8086 0a54): Already using the nvme driver 00:33:09.756 0000:74:02.0 (8086 0cfe): Already using the idxd driver 00:33:09.756 0000:f1:02.0 (8086 0cfe): Already using the idxd driver 00:33:09.756 0000:79:02.0 (8086 0cfe): Already using the idxd driver 00:33:09.756 0000:6f:01.0 (8086 0b25): Already using the idxd driver 00:33:09.756 0000:6f:02.0 (8086 0cfe): Already using the idxd driver 00:33:09.756 0000:f6:01.0 (8086 0b25): Already using the idxd driver 00:33:09.756 0000:f6:02.0 (8086 0cfe): Already using the idxd driver 00:33:09.756 0000:74:01.0 (8086 0b25): Already using the idxd driver 00:33:09.756 0000:6a:02.0 (8086 0cfe): Already using the idxd driver 00:33:09.756 0000:79:01.0 (8086 0b25): Already using the idxd driver 00:33:09.756 0000:ec:01.0 (8086 0b25): Already using the idxd driver 00:33:09.756 0000:6a:01.0 (8086 0b25): Already using the idxd driver 00:33:09.756 0000:ca:00.0 (8086 0a54): Already using the nvme driver 00:33:09.756 0000:ec:02.0 (8086 0cfe): Already using the idxd driver 00:33:09.756 0000:e7:01.0 (8086 0b25): Already using the idxd driver 00:33:09.756 0000:e7:02.0 (8086 0cfe): Already using the idxd driver 00:33:09.756 0000:f1:01.0 (8086 0b25): Already using the idxd driver 00:33:13.048 Cleaning 00:33:13.048 Removing: /var/run/dpdk/spdk0/config 00:33:13.048 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:13.048 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:13.048 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:13.048 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:13.048 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:13.048 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:13.048 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:13.048 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:13.048 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:13.048 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:13.048 Removing: /var/run/dpdk/spdk1/config 00:33:13.048 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:13.048 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:13.048 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:13.048 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:13.048 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:13.048 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:13.048 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:13.048 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:13.048 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:13.048 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:13.048 Removing: /var/run/dpdk/spdk2/config 00:33:13.048 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:13.048 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:13.048 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:13.048 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:13.048 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:13.048 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:13.048 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:13.048 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:13.048 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:13.048 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:13.048 Removing: /var/run/dpdk/spdk3/config 00:33:13.048 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:13.048 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:13.048 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:13.048 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:13.048 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:13.048 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:13.048 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:13.048 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:13.048 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:13.048 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:13.048 Removing: /var/run/dpdk/spdk4/config 00:33:13.048 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:13.048 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:13.048 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:13.048 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:13.048 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:13.048 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:13.309 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:13.309 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:13.309 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:13.309 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:13.309 Removing: /dev/shm/nvmf_trace.0 00:33:13.309 Removing: /dev/shm/spdk_tgt_trace.pid1788638 00:33:13.309 Removing: /var/run/dpdk/spdk0 00:33:13.309 Removing: /var/run/dpdk/spdk1 00:33:13.309 Removing: /var/run/dpdk/spdk2 00:33:13.309 Removing: /var/run/dpdk/spdk3 00:33:13.309 Removing: /var/run/dpdk/spdk4 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1783308 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1785477 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1788638 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1789492 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1791095 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1791442 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1792688 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1792742 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1793347 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1796224 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1798441 00:33:13.309 
Removing: /var/run/dpdk/spdk_pid1798947 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1799434 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1799818 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1800200 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1800514 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1800829 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1801173 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1801820 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1805064 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1805601 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1806002 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1806019 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1806940 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1807057 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1807878 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1808174 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1808511 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1808535 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1808880 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1809158 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1809862 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1810181 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1810540 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1813269 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1814835 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1816646 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1818588 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1820553 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1822367 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1824374 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1826399 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1828638 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1830451 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1832537 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1834354 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1836178 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1838258 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1840069 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1841988 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1843980 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1845796 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1847808 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1849699 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1851667 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1853619 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1855440 00:33:13.309 Removing: /var/run/dpdk/spdk_pid1857807 00:33:13.570 Removing: /var/run/dpdk/spdk_pid1860370 00:33:13.570 Removing: /var/run/dpdk/spdk_pid1864766 00:33:13.570 Removing: /var/run/dpdk/spdk_pid1916684 00:33:13.570 Removing: /var/run/dpdk/spdk_pid1921593 00:33:13.570 Removing: /var/run/dpdk/spdk_pid1933550 00:33:13.570 Removing: /var/run/dpdk/spdk_pid1939849 00:33:13.570 Removing: /var/run/dpdk/spdk_pid1944633 00:33:13.570 Removing: /var/run/dpdk/spdk_pid1945251 00:33:13.570 Removing: /var/run/dpdk/spdk_pid1956512 00:33:13.570 Removing: /var/run/dpdk/spdk_pid1956846 00:33:13.570 Removing: /var/run/dpdk/spdk_pid1961934 00:33:13.570 Removing: /var/run/dpdk/spdk_pid1968550 00:33:13.570 Removing: /var/run/dpdk/spdk_pid1971595 00:33:13.570 Removing: /var/run/dpdk/spdk_pid1984165 00:33:13.570 Removing: /var/run/dpdk/spdk_pid1994688 00:33:13.570 Removing: /var/run/dpdk/spdk_pid1996778 00:33:13.570 Removing: /var/run/dpdk/spdk_pid1997972 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2017946 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2022564 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2049599 00:33:13.570 
Removing: /var/run/dpdk/spdk_pid2054659 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2056521 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2058826 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2059016 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2059233 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2059543 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2060451 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2062571 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2063833 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2064472 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2067026 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2067825 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2068754 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2073457 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2080454 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2085263 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2093718 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2093722 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2099748 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2100045 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2100348 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2100931 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2100949 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2106293 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2107070 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2112013 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2115331 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2121647 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2127990 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2138027 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2146714 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2146717 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2168225 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2170353 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2172559 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2174838 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2178468 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2179359 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2179987 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2180880 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2182278 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2183573 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2184238 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2185084 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2186432 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2195771 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2195782 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2201973 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2204296 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2206830 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2208459 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2210998 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2212488 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2223155 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2223746 00:33:13.570 Removing: /var/run/dpdk/spdk_pid2224342 00:33:13.830 Removing: /var/run/dpdk/spdk_pid2227973 00:33:13.830 Removing: /var/run/dpdk/spdk_pid2228578 00:33:13.830 Removing: /var/run/dpdk/spdk_pid2229240 00:33:13.830 Removing: /var/run/dpdk/spdk_pid2234466 00:33:13.830 Removing: /var/run/dpdk/spdk_pid2234764 00:33:13.830 Removing: /var/run/dpdk/spdk_pid2236382 00:33:13.830 Clean 00:33:13.830 00:48:39 -- common/autotest_common.sh@1448 -- # return 0 00:33:13.830 00:48:39 -- spdk/autotest.sh@380 -- # timing_exit post_cleanup 00:33:13.830 00:48:39 -- common/autotest_common.sh@727 -- # xtrace_disable 00:33:13.830 00:48:39 -- 
common/autotest_common.sh@10 -- # set +x 00:33:13.830 00:48:39 -- spdk/autotest.sh@382 -- # timing_exit autotest 00:33:13.830 00:48:39 -- common/autotest_common.sh@727 -- # xtrace_disable 00:33:13.830 00:48:39 -- common/autotest_common.sh@10 -- # set +x 00:33:13.830 00:48:39 -- spdk/autotest.sh@383 -- # chmod a+r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/timing.txt 00:33:13.830 00:48:39 -- spdk/autotest.sh@385 -- # [[ -f /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/udev.log ]] 00:33:13.830 00:48:39 -- spdk/autotest.sh@385 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/udev.log 00:33:13.830 00:48:39 -- spdk/autotest.sh@387 -- # hash lcov 00:33:13.830 00:48:39 -- spdk/autotest.sh@387 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:13.830 00:48:39 -- spdk/autotest.sh@389 -- # hostname 00:33:13.830 00:48:39 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/dsa-phy-autotest/spdk -t spdk-fcp-07 -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_test.info 00:33:14.091 geninfo: WARNING: invalid characters removed from testname! 00:33:36.043 00:49:00 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:33:36.612 00:49:02 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:33:37.995 00:49:03 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:33:39.380 00:49:05 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:33:40.763 00:49:06 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:33:42.146 00:49:07 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info 00:33:43.527 00:49:09 -- spdk/autotest.sh@396 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:43.527 00:49:09 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:33:43.527 00:49:09 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:43.527 00:49:09 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:43.527 00:49:09 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:43.527 00:49:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.527 00:49:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.527 00:49:09 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.527 00:49:09 -- paths/export.sh@5 -- $ export PATH 00:33:43.527 00:49:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:43.527 00:49:09 -- common/autobuild_common.sh@436 -- $ out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output 00:33:43.527 00:49:09 -- common/autobuild_common.sh@437 -- $ date +%s 00:33:43.527 00:49:09 -- common/autobuild_common.sh@437 -- $ mktemp -dt spdk_1715726949.XXXXXX 00:33:43.527 00:49:09 -- common/autobuild_common.sh@437 -- $ SPDK_WORKSPACE=/tmp/spdk_1715726949.Pw08gN 00:33:43.527 00:49:09 -- common/autobuild_common.sh@439 -- $ [[ -n '' ]] 00:33:43.527 00:49:09 -- common/autobuild_common.sh@443 -- $ '[' -n '' ']' 00:33:43.527 00:49:09 -- common/autobuild_common.sh@446 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/' 00:33:43.527 00:49:09 -- common/autobuild_common.sh@450 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp' 00:33:43.527 00:49:09 -- common/autobuild_common.sh@452 -- $ scanbuild='scan-build -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/ --exclude 
/var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:33:43.527 00:49:09 -- common/autobuild_common.sh@453 -- $ get_config_params 00:33:43.527 00:49:09 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:33:43.527 00:49:09 -- common/autotest_common.sh@10 -- $ set +x 00:33:43.527 00:49:09 -- common/autobuild_common.sh@453 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk' 00:33:43.528 00:49:09 -- common/autobuild_common.sh@455 -- $ start_monitor_resources 00:33:43.528 00:49:09 -- pm/common@17 -- $ local monitor 00:33:43.528 00:49:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:43.528 00:49:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:43.528 00:49:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:43.528 00:49:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:43.528 00:49:09 -- pm/common@21 -- $ date +%s 00:33:43.528 00:49:09 -- pm/common@21 -- $ date +%s 00:33:43.528 00:49:09 -- pm/common@25 -- $ sleep 1 00:33:43.528 00:49:09 -- pm/common@21 -- $ date +%s 00:33:43.528 00:49:09 -- pm/common@21 -- $ date +%s 00:33:43.528 00:49:09 -- pm/common@21 -- $ /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715726949 00:33:43.528 00:49:09 -- pm/common@21 -- $ /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715726949 00:33:43.528 00:49:09 -- pm/common@21 -- $ /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715726949 00:33:43.528 00:49:09 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1715726949 00:33:43.528 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715726949_collect-vmstat.pm.log 00:33:43.528 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715726949_collect-cpu-temp.pm.log 00:33:43.528 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715726949_collect-cpu-load.pm.log 00:33:43.528 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1715726949_collect-bmc-pm.bmc.pm.log 00:33:44.469 00:49:10 -- common/autobuild_common.sh@456 -- $ trap stop_monitor_resources EXIT 00:33:44.469 00:49:10 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j128 00:33:44.469 00:49:10 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/dsa-phy-autotest/spdk 00:33:44.469 00:49:10 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:44.469 00:49:10 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:33:44.469 00:49:10 -- spdk/autopackage.sh@19 -- $ timing_finish 00:33:44.469 00:49:10 -- common/autotest_common.sh@733 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:44.469 00:49:10 -- common/autotest_common.sh@734 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:33:44.469 00:49:10 -- common/autotest_common.sh@736 -- $ 
/usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/timing.txt 00:33:44.469 00:49:10 -- spdk/autopackage.sh@20 -- $ exit 0 00:33:44.469 00:49:10 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:33:44.469 00:49:10 -- pm/common@29 -- $ signal_monitor_resources TERM 00:33:44.469 00:49:10 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:33:44.469 00:49:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:44.469 00:49:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:33:44.469 00:49:10 -- pm/common@44 -- $ pid=2247675 00:33:44.469 00:49:10 -- pm/common@50 -- $ kill -TERM 2247675 00:33:44.469 00:49:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:44.469 00:49:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:33:44.469 00:49:10 -- pm/common@44 -- $ pid=2247677 00:33:44.469 00:49:10 -- pm/common@50 -- $ kill -TERM 2247677 00:33:44.469 00:49:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:44.469 00:49:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:33:44.469 00:49:10 -- pm/common@44 -- $ pid=2247679 00:33:44.469 00:49:10 -- pm/common@50 -- $ kill -TERM 2247679 00:33:44.469 00:49:10 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:33:44.469 00:49:10 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:33:44.469 00:49:10 -- pm/common@44 -- $ pid=2247708 00:33:44.469 00:49:10 -- pm/common@50 -- $ sudo -E kill -TERM 2247708 00:33:44.469 + [[ -n 1669909 ]] 00:33:44.469 + sudo kill 1669909 00:33:44.480 [Pipeline] } 00:33:44.500 [Pipeline] // stage 00:33:44.506 [Pipeline] } 00:33:44.525 [Pipeline] // timeout 00:33:44.530 [Pipeline] } 00:33:44.548 [Pipeline] // catchError 00:33:44.554 [Pipeline] } 00:33:44.573 [Pipeline] // wrap 00:33:44.579 [Pipeline] } 00:33:44.596 [Pipeline] // catchError 00:33:44.606 [Pipeline] stage 00:33:44.608 [Pipeline] { (Epilogue) 00:33:44.623 [Pipeline] catchError 00:33:44.625 [Pipeline] { 00:33:44.642 [Pipeline] echo 00:33:44.644 Cleanup processes 00:33:44.652 [Pipeline] sh 00:33:44.945 + sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:33:44.945 2248185 sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:33:44.960 [Pipeline] sh 00:33:45.247 ++ sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk 00:33:45.247 ++ grep -v 'sudo pgrep' 00:33:45.247 ++ awk '{print $1}' 00:33:45.247 + sudo kill -9 00:33:45.247 + true 00:33:45.258 [Pipeline] sh 00:33:45.546 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:55.569 [Pipeline] sh 00:33:55.921 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:55.921 Artifacts sizes are good 00:33:55.935 [Pipeline] archiveArtifacts 00:33:55.942 Archiving artifacts 00:33:56.125 [Pipeline] sh 00:33:56.411 + sudo chown -R sys_sgci /var/jenkins/workspace/dsa-phy-autotest 00:33:56.426 [Pipeline] cleanWs 00:33:56.437 [WS-CLEANUP] Deleting project workspace... 00:33:56.437 [WS-CLEANUP] Deferred wipeout is used... 
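After the tests complete, autotest.sh post-processes the gcov data with the lcov calls recorded above: capture counters from the instrumented build tree, merge them with the pre-test baseline, then strip third-party and helper-app paths from the combined tracefile. A condensed sketch of that pipeline; the --rc option list is trimmed to the branch/function-coverage flags and the repeated -r filters are folded into a loop (the log runs each filter as a separate lcov invocation):

  #!/usr/bin/env bash
  spdk=/var/jenkins/workspace/dsa-phy-autotest/spdk
  out=$spdk/../output
  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

  # Capture per-test counters from the build tree, tagged with the host name.
  lcov $LCOV_OPTS -c -d "$spdk" -t "$(hostname)" -o "$out/cov_test.info"

  # Merge the baseline capture and the test capture into one tracefile.
  lcov $LCOV_OPTS -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

  # Remove DPDK, system headers and helper apps from the combined report.
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
  done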
00:33:56.443 [WS-CLEANUP] done
00:33:56.445 [Pipeline] }
00:33:56.464 [Pipeline] // catchError
00:33:56.475 [Pipeline] sh
00:33:56.758 + logger -p user.info -t JENKINS-CI
00:33:56.766 [Pipeline] }
00:33:56.781 [Pipeline] // stage
00:33:56.786 [Pipeline] }
00:33:56.802 [Pipeline] // node
00:33:56.807 [Pipeline] End of Pipeline
00:33:56.835 Finished: SUCCESS